Job:
#OCPBUGS-32517 issue 39 hours ago: Missing worker nodes on metal (Verified)
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[12603]: Unpause all baremetal hosts
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.630867   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Mon 2024-04-22 05:33:53 UTC localhost.localdomain master-bmh-update.service[18264]: E0422 05:33:53.631351   18264 memcache.go:265] couldn't get current server API group list: Get "https://localhost:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused

... 4 lines not shown
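The repeated "connection refused" above means nothing was accepting connections on the API port at that moment, as opposed to a timeout, which would point at network or firewall problems. A minimal probe sketch in Go, assuming only the standard library and the localhost:6443 endpoint from the log above:

{code:go}
package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the API endpoint the same way the failing service does: a TCP dial.
// "connect: connection refused" means the SYN was actively rejected, i.e.
// no process is listening on the port yet (the apiserver is still down).
func main() {
	addr := net.JoinHostPort("localhost", "6443")
	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		conn.Close()
		fmt.Println("API port is accepting connections")
		return
	}
}
{code}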

#OCPBUGS-27755 issue 9 days ago: openshift-kube-apiserver down and is not being restarted (New)
Issue 15736514: openshift-kube-apiserver down and is not being restarted
Description: Description of problem:
 {code:none}
 SNO cluster; this is the second time this issue has happened.
 
 Errors like the following are reported:
 
 ~~~
 failed to fetch token: Post "https://api-int.<cluster>:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp <ip>:6443: connect: connection refused
 ~~~
 
 Checking the pod logs, the kube-apiserver pod was terminated and is not being restarted:
 
 ~~~
 2024-01-13T09:41:40.931716166Z I0113 09:41:40.931584       1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
 ~~~{code}
 Version-Release number of selected component (if applicable):
 {code:none}
    4.13.13 {code}
 How reproducible:
 {code:none}
     Not reproducible but has happened twice{code}
 Steps to Reproduce:
 {code:none}
     1.
     2.
     3.
     {code}
 Actual results:
 {code:none}
     API is not available and kube-apiserver is not being restarted{code}
 Expected results:
 {code:none}
     We would expect to see kube-apiserver restarts{code}
 Additional info:
 {code:none}
    {code}
Status: New
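The main.go:213 line above suggests a wrapper process that forwards termination signals to the hyperkube sub-process. A minimal sketch of that forwarding pattern, not the actual wrapper (the child command is a stand-in; note the reported bug is that nothing restarts the pod afterwards, which for a static pod is the kubelet's job, not the wrapper's):

{code:go}
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

// Start a child process and forward SIGTERM/SIGINT to it, mirroring the
// "Received signal terminated. Forwarding to sub-process" log line.
// "sleep" stands in for the real sub-process.
func main() {
	cmd := exec.Command("sleep", "3600")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		s := <-sigs
		// Forward and let the child terminate; restarting it again is the
		// orchestrator's responsibility, not this wrapper's.
		cmd.Process.Signal(s)
	}()

	cmd.Wait()
}
{code}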
#OCPBUGS-30631 issue 2 weeks ago: SNO (RT kernel) sosreport crashes the SNO node (CLOSED)
Issue 15865131: SNO (RT kernel) sosreport crashes the SNO node
Description: Description of problem:
 {code:none}
 sosreport collection causes the SNO XR11 node to crash.
 {code}
 Version-Release number of selected component (if applicable):
 {code:none}
 - RHOCP    : 4.12.30
 - kernel   : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
 - platform : x86_64{code}
 How reproducible:
 {code:none}
 sh-4.4# chrt -rr 99 toolbox
 .toolboxrc file detected, overriding defaults...
 Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
 Container 'toolbox-root' already exists. Trying to start...
 (To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
 toolbox-root
 Container started successfully. To exit, type 'exit'.
 [root@node /]# which sos
 /usr/sbin/sos
 logger: socket /dev/log: No such file or directory
 [root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -kpodman.logs=on
 
 sosreport (version 4.5.6)
 
 This command will collect diagnostic and configuration information from
 this Red Hat CoreOS system.
 
 An archive containing the collected information will be generated in
 /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support
 representative.
 
 Any information provided to Red Hat will be treated in accordance with
 the published support policies at:
 
         Distribution Website : https://www.redhat.com/
         Commercial Support   : https://access.redhat.com/
 
 The generated archive may contain data considered sensitive and its
 content should be reviewed by the originating organization before being
 passed to any third party.
 
 No changes will be made to system configuration.
 
 
  Setting up archive ...
  Setting up plugins ...
 [plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
 caught exception in plugin method "system.setup()"
 writing traceback to sos_logs/system-plugin-errors.txt
 [plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
 [plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
  Running plugins. Please wait ...
 
   Starting 1/91  alternatives    [Running: alternatives]
   Starting 2/91  atomichost      [Running: alternatives atomichost]
   Starting 3/91  auditd          [Running: alternatives atomichost auditd]
   Starting 4/91  block           [Running: alternatives atomichost auditd block]
   Starting 5/91  boot            [Running: alternatives auditd block boot]
   Starting 6/91  cgroups         [Running: auditd block boot cgroups]
   Starting 7/91  chrony          [Running: auditd block cgroups chrony]
   Starting 8/91  cifs            [Running: auditd block cgroups cifs]
   Starting 9/91  conntrack       [Running: auditd block cgroups conntrack]
   Starting 10/91 console         [Running: block cgroups conntrack console]
   Starting 11/91 container_log   [Running: block cgroups conntrack container_log]
   Starting 12/91 containers_common [Running: block cgroups conntrack containers_common]
   Starting 13/91 crio            [Running: block cgroups conntrack crio]
   Starting 14/91 crypto          [Running: cgroups conntrack crio crypto]
   Starting 15/91 date            [Running: cgroups conntrack crio date]
   Starting 16/91 dbus            [Running: cgroups conntrack crio dbus]
   Starting 17/91 devicemapper    [Running: cgroups conntrack crio devicemapper]
   Starting 18/91 devices         [Running: cgroups conntrack crio devices]
   Starting 19/91 dracut          [Running: cgroups conntrack crio dracut]
   Starting 20/91 ebpf            [Running: cgroups conntrack crio ebpf]
   Starting 21/91 etcd            [Running: cgroups crio ebpf etcd]
   Starting 22/91 filesys         [Running: cgroups crio ebpf filesys]
   Starting 23/91 firewall_tables [Running: cgroups crio filesys firewall_tables]
   Starting 24/91 fwupd           [Running: cgroups crio filesys fwupd]
   Starting 25/91 gluster         [Running: cgroups crio filesys gluster]
   Starting 26/91 grub2           [Running: cgroups crio filesys grub2]
   Starting 27/91 gssproxy        [Running: cgroups crio grub2 gssproxy]
   Starting 28/91 hardware        [Running: cgroups crio grub2 hardware]
   Starting 29/91 host            [Running: cgroups crio hardware host]
   Starting 30/91 hts             [Running: cgroups crio hardware hts]
   Starting 31/91 i18n            [Running: cgroups crio hardware i18n]
   Starting 32/91 iscsi           [Running: cgroups crio hardware iscsi]
   Starting 33/91 jars            [Running: cgroups crio hardware jars]
   Starting 34/91 kdump           [Running: cgroups crio hardware kdump]
   Starting 35/91 kernelrt        [Running: cgroups crio hardware kernelrt]
   Starting 36/91 keyutils        [Running: cgroups crio hardware keyutils]
   Starting 37/91 krb5            [Running: cgroups crio hardware krb5]
   Starting 38/91 kvm             [Running: cgroups crio hardware kvm]
   Starting 39/91 ldap            [Running: cgroups crio kvm ldap]
   Starting 40/91 libraries       [Running: cgroups crio kvm libraries]
   Starting 41/91 libvirt         [Running: cgroups crio kvm libvirt]
   Starting 42/91 login           [Running: cgroups crio kvm login]
   Starting 43/91 logrotate       [Running: cgroups crio kvm logrotate]
   Starting 44/91 logs            [Running: cgroups crio kvm logs]
   Starting 45/91 lvm2            [Running: cgroups crio logs lvm2]
   Starting 46/91 md              [Running: cgroups crio logs md]
   Starting 47/91 memory          [Running: cgroups crio logs memory]
   Starting 48/91 microshift_ovn  [Running: cgroups crio logs microshift_ovn]
   Starting 49/91 multipath       [Running: cgroups crio logs multipath]
   Starting 50/91 networkmanager  [Running: cgroups crio logs networkmanager]
 
 Removing debug pod ...
 error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused
 {code}
 Steps to Reproduce:
 {code:none}
 Launch a debug pod, run the procedure above, and it crashes the node{code}
 Actual results:
 {code:none}
 Node crash{code}
 Expected results:
 {code:none}
 Node does not crash{code}
 Additional info:
 {code:none}
 We have two vmcores on the associated SFDC ticket.
 This system uses an RT kernel.
 It is using an out-of-tree ice driver 1.13.7 (probably from 22 Dec 2023).
 
 [  103.681608] ice: module unloaded
 [  103.830535] ice: loading out-of-tree module taints kernel.
 [  103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel
 [  103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7
 [  103.841017] ice: Copyright (C) 2018-2023 Intel Corporation
 
 
 With the following kernel command line:
 
 Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M
 
 
 
 The first vmcore shows an issue with the ice driver:
 
 crash vmcore tmp/vmlinux
 
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 17:16:57 CET 2024
       UPTIME: 02:44:28
 LOAD AVERAGE: 24.97, 25.47, 25.46
        TASKS: 5324
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 693
      COMMAND: "khungtaskd"
         TASK: ff4d1890260d4000  [THREAD_INFO: ff4d1890260d4000]
          CPU: 0
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps|grep sos
   449071  363440  31  ff4d189005f68000  IN   0.2  506428 314484  sos
   451043  363440  63  ff4d188943a9c000  IN   0.2  506428 314484  sos
   494099  363440  29  ff4d187f941f4000  UN   0.2  506428 314484  sos
 
 [ 8457.517696] ------------[ cut here ]------------
 [ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out
 [ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300
 [ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel
 [ 8457.517770]  udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice]
 [ 8457.517784] Red Hat flags: eBPF/rawtrace
 [ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
 [ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023
 [ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300
 [ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73
 [ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286
 [ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001
 [ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff
 [ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff
 [ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000
 [ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080
 [ 8457.517803] FS:  0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000
 [ 8457.517804] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 [ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0
 [ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 [ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 [ 8457.517808] PKRU: 55555554
 [ 8457.517810] Call Trace:
 [ 8457.517813]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517816]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517818]  call_timer_fn+0x32/0x1d0
 [ 8457.517822]  ? test_ti_thread_flag.constprop.50+0x10/0x10
 [ 8457.517825]  run_timer_softirq+0x1fc/0x640
 [ 8457.517828]  ? _raw_spin_unlock_irq+0x1d/0x60
 [ 8457.517833]  ? finish_task_switch+0xea/0x320
 [ 8457.517836]  ? __switch_to+0x10c/0x4d0
 [ 8457.517840]  __do_softirq+0xa5/0x33f
 [ 8457.517844]  run_timersd+0x61/0xb0
 [ 8457.517848]  smpboot_thread_fn+0x1c1/0x2b0
 [ 8457.517851]  ? smpboot_register_percpu_thread_cpumask+0x140/0x140
 [ 8457.517853]  kthread+0x151/0x170
 [ 8457.517856]  ? set_kthread_struct+0x50/0x50
 [ 8457.517858]  ret_from_fork+0x1f/0x40
 [ 8457.517861] ---[ end trace 0000000000000002 ]---
 [ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0
 [ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8506.139246] ice 0000:8a:00.1: PTP reset successful
 [ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF
 [ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. VSI index 1, type ICE_VSI_CTRL
 [ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0
 [ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
 [ 8563.891356] ice 0000:8a:00.1: PTP reset successful
 
 A second vmcore from the same node shows an issue with the SSD drive:
 
 $ crash vmcore-2 tmp/vmlinux
 
       KERNEL: tmp/vmlinux  [TAINTED]
     DUMPFILE: vmcore-2  [PARTIAL DUMP]
         CPUS: 64
         DATE: Thu Mar  7 14:29:31 CET 2024
       UPTIME: 1 days, 07:19:52
 LOAD AVERAGE: 25.55, 26.42, 28.30
        TASKS: 5409
     NODENAME: aaa.bbb.ccc
      RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
      MACHINE: x86_64  (1500 Mhz)
       MEMORY: 127.3 GB
        PANIC: "Kernel panic - not syncing:"
          PID: 696
      COMMAND: "khungtaskd"
         TASK: ff2b35ed48d30000  [THREAD_INFO: ff2b35ed48d30000]
          CPU: 34
        STATE: TASK_RUNNING (PANIC)
 
 crash> ps |grep sos
   719784  718369  62  ff2b35ff00830000  IN   0.4 1215636 563388  sos
   721740  718369  61  ff2b3605579f8000  IN   0.4 1215636 563388  sos
   721742  718369  63  ff2b35fa5eb9c000  IN   0.4 1215636 563388  sos
   721744  718369  30  ff2b3603367fc000  IN   0.4 1215636 563388  sos
   721746  718369  29  ff2b360557944000  IN   0.4 1215636 563388  sos
   743356  718369  62  ff2b36042c8e0000  IN   0.4 1215636 563388  sos
   743818  718369  29  ff2b35f6186d0000  IN   0.4 1215636 563388  sos
   748518  718369  61  ff2b3602cfb84000  IN   0.4 1215636 563388  sos
   748884  718369  62  ff2b360713418000  UN   0.4 1215636 563388  sos
 
 crash> dmesg
 
 [111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen
 [111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309895] ata3.00: status: { DRDY }
 [111871.309897] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309904] ata3.00: cmd 61/40:20:68:47:4b/00:00:00:00:00/40 tag 4 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309908] ata3.00: status: { DRDY }
 [111871.309909] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309910] ata3.00: cmd 61/40:28:a8:47:4b/00:00:00:00:00/40 tag 5 ncq dma 32768 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309913] ata3.00: status: { DRDY }
 [111871.309914] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309915] ata3.00: cmd 61/40:30:e8:47:4b/00:00:00:00:00/40 tag 6 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309918] ata3.00: status: { DRDY }
 [111871.309919] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309919] ata3.00: cmd 61/70:38:48:37:2b/00:00:1c:00:00/40 tag 7 ncq dma 57344 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309922] ata3.00: status: { DRDY }
 [111871.309923] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309924] ata3.00: cmd 61/20:40:78:29:0c/00:00:19:00:00/40 tag 8 ncq dma 16384 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309927] ata3.00: status: { DRDY }
 [111871.309928] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309929] ata3.00: cmd 61/08:48:08:0c:c0/00:00:1c:00:00/40 tag 9 ncq dma 4096 out
                          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309932] ata3.00: status: { DRDY }
 [111871.309933] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309934] ata3.00: cmd 61/40:50:28:48:4b/00:00:00:00:00/40 tag 10 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309937] ata3.00: status: { DRDY }
 [111871.309938] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309939] ata3.00: cmd 61/40:58:68:48:4b/00:00:00:00:00/40 tag 11 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309942] ata3.00: status: { DRDY }
 [111871.309943] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309944] ata3.00: cmd 61/40:60:a8:48:4b/00:00:00:00:00/40 tag 12 ncq dma 32768 out
                          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
 [111871.309946] ata3.00: status: { DRDY }
 [111871.309947] ata3.00: failed command: WRITE FPDMA QUEUED
 [111871.309948] ata3.00: cmd 61/40:68:e8:48:4b/00:00:00:00:00/40 tag 13 ncq dma 32768 out
                          res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
 [111871.309951] ata3.00: status: { DRDY }
 [111871.309953] ata3: hard resetting link
 ...
 ...
 ...
 [112789.787310] INFO: task sos:748884 blocked for more than 600 seconds.
 [112789.787314]       Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
 [112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 [112789.787316] task:sos             state:D stack:    0 pid:748884 ppid:718369 flags:0x00084080
 [112789.787320] Call Trace:
 [112789.787323]  __schedule+0x37b/0x8e0
 [112789.787330]  schedule+0x6c/0x120
 [112789.787333]  schedule_timeout+0x2b7/0x410
 [112789.787336]  ? enqueue_entity+0x130/0x790
 [112789.787340]  wait_for_completion+0x84/0xf0
 [112789.787343]  flush_work+0x120/0x1d0
 [112789.787347]  ? flush_workqueue_prep_pwqs+0x130/0x130
 [112789.787350]  schedule_on_each_cpu+0xa7/0xe0
 [112789.787353]  vmstat_refresh+0x22/0xa0
 [112789.787357]  proc_sys_call_handler+0x174/0x1d0
 [112789.787361]  vfs_read+0x91/0x150
 [112789.787364]  ksys_read+0x52/0xc0
 [112789.787366]  do_syscall_64+0x87/0x1b0
 [112789.787369]  entry_SYSCALL_64_after_hwframe+0x61/0xc6
 [112789.787372] RIP: 0033:0x7f2dca8c2ab4
 [112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a.
 [112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
 [112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4
 [112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008
 [112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb
 [112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000
 [112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90
 [112789.787418] NMI backtrace for cpu 34 {code}
Status: CLOSED
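One detail worth checking in the transcript above: the kernel command line isolates CPUs 3-31 and 35-63 (isolcpus=managed_irq,3-31,35-63), yet sos was pinned with taskset to 29-31,61-63, entirely inside the isolated set. The hung-task trace blocks in schedule_on_each_cpu() (via vmstat_refresh), which must run a work item on every CPU, including starved isolated ones. That interaction is an assumption about the trigger, not a confirmed root cause. A small Go sketch for checking the isolated set before pinning a collector:

{code:go}
package main

import (
	"fmt"
	"os"
	"strings"
)

// Print the kernel-isolated CPU list so diagnostic tools can be pinned to
// the housekeeping CPUs (the complement) instead. The sysfs path is
// standard on Linux; on this node it should report "3-31,35-63".
func main() {
	data, err := os.ReadFile("/sys/devices/system/cpu/isolated")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read isolated CPU list:", err)
		os.Exit(1)
	}
	fmt.Println("kernel-isolated CPUs:", strings.TrimSpace(string(data)))
}
{code}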
#OCPBUGS-33157 issue 39 hours ago: IPv6 metal-ipi jobs: master-bmh-update losing access to API (Verified)
Issue 15978085: IPv6 metal-ipi jobs: master-bmh-update losing access to API
Description: The last 4 IPv6 jobs are failing on the same error
 
 https://prow.ci.openshift.org/job-history/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6
 master-bmh-update.log shows the script losing access to the API when trying to get/update the BMH details
 
 https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6/1785492737169035264
 
 
 
 {noformat}
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[4663]: Waiting for 3 masters to become provisioned
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531242   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.531808   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533281   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.533630   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: E0501 03:32:23.535180   24484 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
 May 01 03:32:23 localhost.localdomain master-bmh-update.sh[24484]: The connection to the server api-int.ostest.test.metalkube.org:6443 was refused - did you specify the right host or port?
 {noformat}
Status: Verified
{noformat}
May 01 02:49:40 localhost.localdomain master-bmh-update.sh[12448]: E0501 02:49:40.429468   12448 memcache.go:265] couldn't get current server API group list: Get "https://api-int.ostest.test.metalkube.org:6443/api?timeout=32s": dial tcp [fd2e:6f44:5dd8:c956::5]:6443: connect: connection refused
{noformat}
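The dial targets in these logs are IPv6 literals, which must be bracketed when combined with a port ([fd2e:6f44:5dd8:c956::5]:6443). The failures above are plain connection refusals, so the address formatting is not the bug here; still, for tooling that builds such endpoints, net.JoinHostPort handles both address families correctly (a small illustration, not taken from the job scripts):

{code:go}
package main

import (
	"fmt"
	"net"
)

// JoinHostPort brackets IPv6 literals automatically; hand-built
// "host:port" strings are a common source of IPv6-only breakage.
func main() {
	for _, host := range []string{"fd2e:6f44:5dd8:c956::5", "10.71.136.144"} {
		fmt.Println(net.JoinHostPort(host, "6443"))
	}
	// Prints:
	//   [fd2e:6f44:5dd8:c956::5]:6443
	//   10.71.136.144:6443
}
{code}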
#OCPBUGS-32375 issue 10 days ago: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI (CLOSED)
Issue 15945005: Unsuccessful cluster installation with 4.15 nightlies on s390x using ABI
Description: When using the latest s390x release builds from the 4.15 nightly stream for Agent-Based Installation of SNO on IBM Z KVM, the installation fails at the end while watching cluster operators, even though the DNS and HAProxy configurations are correct: the same setup works with 4.15.x stable release image builds.
 
 Below is the error encountered multiple times when the "release:s390x-latest" image is used while booting the cluster. This image is supplied at boot through OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE, while the installer binary is fetched from the latest stable builds here: [https://mirror.openshift.com/pub/openshift-v4/s390x/clients/ocp/latest/], for which the version is around 4.15.x.
 
 *release-image:*
 {code:java}
 registry.build01.ci.openshift.org/ci-op-cdkdqnqn/release@sha256:c6eb4affa5c44d2ad220d7064e92270a30df5f26d221e35664f4d5547a835617
 {code}
 
 *PROW CI Build :* [https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/47965/rehearse-47965-periodic-ci-openshift-multiarch-master-nightly-4.15-e2e-agent-ibmz-sno/1780162365824700416] 
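The wait-for loop in the error below is the installer's client-go reflector repeatedly listing the ClusterVersion object and hitting connection refused. A rough sketch of an equivalent poll (the kubeconfig path follows the standard installer layout under the --dir used above and is an assumption; the real installer uses an informer rather than this naive loop):

{code:go}
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll the ClusterVersion resource named "version" until the apiserver
// answers, retrying through transient dial errors like the ones below.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/agent-sno/auth/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group:    "config.openshift.io",
		Version:  "v1",
		Resource: "clusterversions",
	}
	for {
		cv, err := client.Resource(gvr).Get(context.TODO(), "version", metav1.GetOptions{})
		if err != nil {
			fmt.Println("retrying:", err) // e.g. dial tcp ...:6443: connect: connection refused
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Println("got ClusterVersion:", cv.GetName())
		return
	}
}
{code}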
 
 *Error:* 
 {code:java}
 '/root/agent-sno/openshift-install wait-for install-complete --dir /root/agent-sno/ --log-level debug'
 Warning: Permanently added '128.168.142.71' (ED25519) to the list of known hosts.
 level=debug msg=OpenShift Installer 4.15.8
 level=debug msg=Built from commit f4f5d0ee0f7591fd9ddf03ac337c804608102919
 level=debug msg=Loading Install Config...
 level=debug msg=  Loading SSH Key...
 level=debug msg=  Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Cluster Name...
 level=debug msg=    Loading Base Domain...
 level=debug msg=    Loading Platform...
 level=debug msg=  Loading Pull Secret...
 level=debug msg=  Loading Platform...
 level=debug msg=Loading Agent Config...
 level=debug msg=Using Agent Config loaded from state file
 level=warning msg=An agent configuration was detected but this command is not the agent wait-for command
 level=info msg=Waiting up to 40m0s (until 10:15AM UTC) for the cluster at https://api.agent-sno.abi-ci.com:6443 to initialize...
 W0416 09:35:51.793770    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:51.793827    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:35:53.127917    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:53.127946    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:35:54.760896    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:35:54.761058    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:36:00.790136    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:36:00.790175    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:36:08.516333    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:36:08.516445    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:36:31.442291    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:36:31.442336    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:37:03.033971    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:37:03.034049    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:37:42.025487    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:37:42.025538    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:38:32.148607    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:38:32.148677    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:39:27.680156    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:39:27.680194    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:40:23.290839    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:40:23.290988    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:41:22.298200    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:41:22.298338    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:42:01.197417    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:42:01.197465    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:42:36.739577    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:42:36.739937    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:43:07.331029    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:43:07.331154    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:44:04.008310    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:44:04.008381    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:44:40.882938    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:44:40.882973    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:45:18.975189    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:45:18.975307    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:45:49.753584    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:45:49.753614    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:46:41.148207    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:46:41.148347    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:47:12.882965    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:47:12.883075    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:47:53.636491    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:47:53.636538    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:48:31.792077    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:48:31.792165    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:49:29.117579    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:49:29.117657    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:50:02.802033    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:50:02.802167    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:50:33.826705    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:50:33.826859    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 W0416 09:51:16.045403    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 09:51:16.045447    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused

... 62 lines not shown

 W0416 10:15:17.227351    1589 reflector.go:535] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 E0416 10:15:17.227424    1589 reflector.go:147] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusterversions?fieldSelector=metadata.name%3Dversion&limit=500&resourceVersion=0": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: Get "https://api.agent-sno.abi-ci.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 10.244.64.4:6443: connect: connection refused
 level=error msg=Cluster initialization failed because one or more operators are not functioning properly.
 level=error msg=The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
 level=error msg=https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
 level=error msg=The 'wait-for install-complete' subcommand can then be used to continue the installation
 level=error msg=failed to initialize the cluster: timed out waiting for the condition
 {"component":"entrypoint","error":"wrapped process failed: exit status 6","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-16T10:15:51Z"}
 error: failed to execute wrapped command: exit status 6 {code}
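 A minimal sketch of node-side triage for the refused dials above, assuming SSH access to the SNO host and the API VIP 10.244.64.4 taken from the logs (the commands are illustrative, not part of the original report):
 {code:none}
 # Reproduce the refused dial from the node itself
 curl -k --connect-timeout 5 https://10.244.64.4:6443/readyz
 
 # Check whether the kube-apiserver static pod is running at all
 sudo crictl ps --name kube-apiserver
 
 # Look for kubelet-side errors around the static pod
 sudo journalctl -u kubelet --since "30 min ago" | grep -i kube-apiserver | tail
 {code}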
Status: CLOSED
#OCPBUGS-31763issue10 days agogcp install cluster creation fails after 30-40 minutes New
Issue 15921939: gcp install cluster creation fails after 30-40 minutes
Description: Component Readiness has found a potential regression in the test "install should succeed: overall". I see this on various platforms, but I started digging into the GCP failures. No installer log bundle is created, which seriously hinders my ability to dig further.
 
 Bootstrap succeeds, and then, roughly 30 minutes into waiting for cluster creation, the install dies.
 
 From [https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-e2e-gcp-sdn-serial/1775871000018161664]
 
 search.ci tells me this affects nearly 10% of jobs on GCP:
 
 [https://search.dptools.openshift.org/?search=Attempted+to+gather+ClusterOperator+status+after+installation+failure%3A+listing+ClusterOperator+objects.*connection+refused&maxAge=168h&context=1&type=bug%2Bissue%2Bjunit&name=.*4.16.*gcp.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job]
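 
 The same impact number can be re-derived from the command line; a minimal sketch, assuming search.ci exposes a JSON endpoint at /search that takes the same query parameters as the UI link above (the endpoint path and response shape are assumptions):
 {code:none}
 curl -sG 'https://search.dptools.openshift.org/search' \
   --data-urlencode 'search=Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects.*connection refused' \
   --data-urlencode 'maxAge=168h' \
   --data-urlencode 'type=junit' \
   --data-urlencode 'name=.*4.16.*gcp.*' \
   | jq 'keys | length'   # count of matching jobs (narrowed to junit; the UI link also matches bug+issue)
 {code}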
 
  
 {code:none}
 time="2024-04-04T13:27:50Z" level=info msg="Waiting up to 40m0s (until 2:07PM UTC) for the cluster at https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443 to initialize..."
 time="2024-04-04T14:07:50Z" level=error msg="Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get \"https://api.ci-op-n3pv5pn3-4e5f3.XXXXXXXXXXXXXXXXXXXXXX:6443/apis/config.openshift.io/v1/clusteroperators\": dial tcp 35.238.130.20:6443: connect: connection refused"
 time="2024-04-04T14:07:50Z" level=error msg="Cluster initialization failed because one or more operators are not functioning properly.\nThe cluster should be accessible for troubleshooting as detailed in the documentation linked below,\nhttps://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html\nThe 'wait-for install-complete' subcommand can then be used to continue the installation"
 time="2024-04-04T14:07:50Z" level=error msg="failed to initialize the cluster: timed out waiting for the condition" {code}
  
 
 Probability of significant regression: 99.44%
 
 Sample (being evaluated) Release: 4.16
 Start Time: 2024-03-29T00:00:00Z
 End Time: 2024-04-04T23:59:59Z
 Success Rate: 68.75%
 Successes: 11
 Failures: 5
 Flakes: 0
 
 Base (historical) Release: 4.15
 Start Time: 2024-02-01T00:00:00Z
 End Time: 2024-02-28T23:59:59Z
 Success Rate: 96.30%
 Successes: 52
 Failures: 2
 Flakes: 0
 
 View the test details report at [https://sippy.dptools.openshift.org/sippy-ng/component_readiness/test_details?arch=amd64&arch=amd64&baseEndTime=2024-02-28%2023%3A59%3A59&baseRelease=4.15&baseStartTime=2024-02-01%2000%3A00%3A00&capability=Other&component=Installer%20%2F%20openshift-installer&confidence=95&environment=sdn%20upgrade-micro%20amd64%20gcp%20standard&excludeArches=arm64%2Cheterogeneous%2Cppc64le%2Cs390x&excludeClouds=openstack%2Cibmcloud%2Clibvirt%2Covirt%2Cunknown&excludeVariants=hypershift%2Cosd%2Cmicroshift%2Ctechpreview%2Csingle-node%2Cassisted%2Ccompact&groupBy=cloud%2Carch%2Cnetwork&ignoreDisruption=true&ignoreMissing=false&minFail=3&network=sdn&network=sdn&pity=5&platform=gcp&platform=gcp&sampleEndTime=2024-04-04%2023%3A59%3A59&sampleRelease=4.16&sampleStartTime=2024-03-29%2000%3A00%3A00&testId=cluster%20install%3A0cb1bb27e418491b1ffdacab58c5c8c0&testName=install%20should%20succeed%3A%20overall&upgrade=upgrade-micro&upgrade=upgrade-micro&variant=standard&variant=standard]
Status: New
#OCPBUGS-17183issue2 days ago[BUG] Assisted installer fails to create bond with active backup for single node installation New
Issue 15401516: [BUG] Assisted installer fails to create bond with active backup for single node installation
Description: Description of problem:
 {code:none}
 The assisted installer always fails to create a bond in active-backup mode from the nmstate YAML, and the errors are: 
 
 ~~~ 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Checking validity of <hostname> of type API_INT_URL 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Successfully resolved API_INT_URL <hostname> 
 Jul 26 07:11:47 <hostname> bootkube.sh[8366]: Unable to reach API_INT_URL's https endpoint at https://xx.xx.32.40:6443/version
 Jul 26 07:12:23 <hostname> bootkube.sh[12960]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
 Jul 26 07:15:15 <hostname> bootkube.sh[15706]: The connection to the server <hostname>:6443 was refused - did you specify the right host or port? 
  ~~~ 
 
 Here, <hostname> is the actual hostname of the node. 
 
 The sosreport and nmstate YAML file are available here: https://drive.google.com/drive/u/0/folders/19dNzKUPIMmnUls2pT_stuJxr2Dxdi5eb{code}
 Version-Release number of selected component (if applicable):
 {code:none}
 4.12 
 Dell 16G PowerEdge R660{code}
 How reproducible:
 {code:none}
 Always at the customer's site{code}
 Steps to Reproduce:
 {code:none}
 1. Open Assisted installer UI (console.redhat.com -> assisted installer) 
 2. Add the network configs as below for host1  
 
 -----------
 interfaces:
 - name: bond99
   type: bond
   state: up
   ipv4:
     address:
     - ip: xx.xx.32.40
       prefix-length: 24
     enabled: true
   link-aggregation:
     mode: active-backup
     options:
       miimon: '140'
     port:
     - eno12399
     - eno12409
 dns-resolver:
   config:
     search:
     - xxxx
     server:
     - xx.xx.xx.xx
 routes:
   config:
     - destination: 0.0.0.0/0
       metric: 150
       next-hop-address: xx.xx.xx.xx
       next-hop-interface: bond99
       table-id: 254    
 -----------
 
 3. Enter the MAC addresses of the interfaces in the fields. 
 4. Generate the ISO and boot the node. The node cannot be reached via ping/SSH; this happens every time and is reproducible.
 5. As there was no way to check what was happening on the node (SSH was not working), we reset the root password and could see that the IP address was present on the bond, yet ping/SSH still did not work.
 6. After multiple reboots, the customer was able to SSH/ping and provided a sosreport, in which we could see the above-mentioned errors in the journal logs.  
  {code}
 Actual results:
 {code:none}
 Fails to install; there appears to be a networking issue.{code}
 Expected results:
 {code:none}
 Able to proceed with the installation without the above-mentioned issues{code}
 Additional info:
 {code:none}
 - The installation works with round-robin bond mode in 4.12. 
 - The installation also works with active-backup in 4.10. 
 - An active-backup bond with 4.12 fails.{code}
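 A minimal sketch for validating the bond configuration outside the installer, assuming the nmstate YAML from the steps above is saved as bond99.yaml on a comparable host (the file name is illustrative):
 {code:none}
 # Apply the desired state; nmstate rolls back automatically if verification fails
 sudo nmstatectl apply bond99.yaml
 
 # Confirm the bond mode and which port is currently active
 cat /proc/net/bonding/bond99
 
 # Show the resulting interface state as nmstate sees it
 sudo nmstatectl show bond99
 {code}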
Status: New
#OCPBUGS-32091issue4 weeks agoCAPI-Installer leaks processes during unsuccessful installs MODIFIED
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 3.134.9.157:6443: connect: connection refused
ERROR Bootstrap failed to complete: Get "https://api.gpei-0515.qe.devcluster.openshift.com:6443/version": dial tcp 18.222.8.23:6443: connect: connection refused

... 1 lines not shown

pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade (all) - 115 runs, 30% failed, 238% of failures match = 70% impact
#1791546975200481280junit26 hours ago
I0517 21:52:48.707462       1 observer_polling.go:159] Starting file observer
W0517 21:52:48.723760       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-201.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0517 21:52:48.724031       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1791523454764191744junit28 hours ago
I0517 18:18:49.427828       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0517 18:23:12.266597       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-6z9jfpnv-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.74.138:6443: connect: connection refused
I0517 18:23:26.174388       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1791523454764191744junit28 hours ago
I0517 20:12:51.921461       1 observer_polling.go:159] Starting file observer
W0517 20:12:51.931292       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-53.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0517 20:12:51.931436       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691
#1791454243794718720junit32 hours ago
I0517 15:04:20.325192       1 observer_polling.go:159] Starting file observer
W0517 15:04:20.338974       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-212.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0517 15:04:20.339204       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1791414510913851392junit35 hours ago
I0517 12:06:35.279782       1 observer_polling.go:159] Starting file observer
W0517 12:06:35.291614       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-22.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0517 12:06:35.291755       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1791223818937700352junit2 days ago
1 tests failed during this blip (2024-05-16 23:27:41.467612291 +0000 UTC m=+3200.538961713 to 2024-05-16 23:27:41.467612291 +0000 UTC m=+3200.538961713): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 16 23:28:16.280 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-179.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0516 23:28:07.071797       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0516 23:28:07.072188       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715902087 cert, and key in /tmp/serving-cert-100568261/serving-signer.crt, /tmp/serving-cert-100568261/serving-signer.key\nStaticPodsDegraded: I0516 23:28:07.415474       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0516 23:28:07.417056       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-179.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0516 23:28:07.417437       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0516 23:28:07.418105       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-100568261/tls.crt::/tmp/serving-cert-100568261/tls.key"\nStaticPodsDegraded: F0516 23:28:07.670544       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready
1 tests failed during this blip (2024-05-16 23:28:16.280384554 +0000 UTC m=+3235.351734016 to 2024-05-16 23:28:16.280384554 +0000 UTC m=+3235.351734016): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1791223818937700352junit2 days ago
1 tests failed during this blip (2024-05-17 00:12:58.229260929 +0000 UTC m=+5917.300610361 to 2024-05-17 00:12:58.229260929 +0000 UTC m=+5917.300610361): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 17 00:13:38.706 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-4-179.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0517 00:13:30.351197       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0517 00:13:30.351442       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715904810 cert, and key in /tmp/serving-cert-80257661/serving-signer.crt, /tmp/serving-cert-80257661/serving-signer.key\nStaticPodsDegraded: I0517 00:13:30.504713       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0517 00:13:30.506177       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-179.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0517 00:13:30.506314       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0517 00:13:30.506921       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-80257661/tls.crt::/tmp/serving-cert-80257661/tls.key"\nStaticPodsDegraded: F0517 00:13:30.659582       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready
1 tests failed during this blip (2024-05-17 00:13:38.706693951 +0000 UTC m=+5957.778043382 to 2024-05-17 00:13:38.706693951 +0000 UTC m=+5957.778043382): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1791278649140318208junit44 hours ago
I0517 01:58:51.970784       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1715910767\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1715910767\" (2024-05-17 00:52:47 +0000 UTC to 2025-05-17 00:52:47 +0000 UTC (now=2024-05-17 01:58:51.970736256 +0000 UTC))"
E0517 02:03:26.100810       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-pg7b7bs0-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.108.192:6443: connect: connection refused
I0517 02:03:43.144008       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1791278649140318208junit44 hours ago
I0517 03:10:21.681018       1 observer_polling.go:159] Starting file observer
W0517 03:10:21.695400       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-125.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0517 03:10:21.695525       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691
#1791097672245972992junit2 days ago
I0516 14:03:56.703371       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0516 14:03:58.691922       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-fks597cw-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.20.45:6443: connect: connection refused
E0516 14:07:11.465998       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-fks597cw-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.74.142:6443: connect: connection refused

... 1 lines not shown

#1791092789144981504junit2 days ago
I0516 15:33:10.828331       1 observer_polling.go:159] Starting file observer
W0516 15:33:10.841515       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-41.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0516 15:33:10.841635       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1791050843114442752junit2 days ago
namespace/openshift-cloud-controller-manager node/ip-10-0-108-51.us-west-1.compute.internal pod/aws-cloud-controller-manager-57586cdfb5-hdkh8 uid/55e927e1-fc82-402c-a69a-d34973228169 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit ://api-int.ci-op-1xwtwzin-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.106.81:6443: connect: connection refused
E0516 10:55:18.206770       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-1xwtwzin-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.45.83:6443: connect: connection refused

... 1 lines not shown

#1791067439434305536junit2 days ago
E0516 12:09:07.636918       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-flcbxnmf-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0516 12:10:09.396904       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-flcbxnmf-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.63.66:6443: connect: connection refused
I0516 12:11:04.551215       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206

... 2 lines not shown

#1790850341533650944junit3 days ago
I0515 23:31:32.029122       1 observer_polling.go:159] Starting file observer
W0515 23:31:32.045913       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-23-198.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0515 23:31:32.046046       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1790784072000212992junit3 days ago
namespace/openshift-cloud-controller-manager node/ip-10-0-42-98.us-west-1.compute.internal pod/aws-cloud-controller-manager-6df74c5669-xbh4d uid/fee615d8-0308-4bb3-9a11-145615d7d6bc container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit /namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=19628": dial tcp 10.0.3.71:6443: connect: connection refused
E0515 17:14:31.676372       1 reflector.go:147] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.ci-op-6t1vt5q1-28ab6.aws-2.ci.openshift.org:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=19628": dial tcp 10.0.3.71:6443: connect: connection refused

... 1 lines not shown

#1791048662801977344junit2 days ago
1 tests failed during this blip (2024-05-16 12:50:36.10704935 +0000 UTC m=+6711.940799732 to 2024-05-16 12:50:36.10704935 +0000 UTC m=+6711.940799732): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 16 12:50:47.411 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-56-182.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0516 12:50:38.301706       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0516 12:50:38.301984       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715863838 cert, and key in /tmp/serving-cert-1473243683/serving-signer.crt, /tmp/serving-cert-1473243683/serving-signer.key\nStaticPodsDegraded: I0516 12:50:38.767195       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0516 12:50:38.768609       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-56-182.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0516 12:50:38.768753       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0516 12:50:38.769330       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1473243683/tls.crt::/tmp/serving-cert-1473243683/tls.key"\nStaticPodsDegraded: F0516 12:50:39.053230       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-16 12:50:47.411331715 +0000 UTC m=+6723.245082096 to 2024-05-16 12:50:47.411331715 +0000 UTC m=+6723.245082096): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1791048662801977344junit2 days ago
I0516 12:45:16.901923       1 observer_polling.go:159] Starting file observer
W0516 12:45:16.919504       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-67.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0516 12:45:16.919634       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691
#1790946215504908288junit2 days ago
namespace/openshift-cloud-controller-manager node/ip-10-0-97-131.ec2.internal pod/aws-cloud-controller-manager-794fbd7bb8-kg5qn uid/122c11f5-172d-4ced-826e-d793e534cf44 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit er-manager/cloud-controller-manager: Get "https://api-int.ci-op-flh28zjy-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.97.130:6443: connect: connection refused
I0516 03:59:32.347394       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1790946215504908288junit2 days ago
I0516 04:05:28.048685       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0516 04:09:11.197025       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-flh28zjy-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.3.32:6443: connect: connection refused
I0516 04:09:37.136845       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1790743771260915712junit3 days ago
May 15 15:43:03.523 - 16s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-16-21.us-west-2.compute.internal" not ready since 2024-05-15 15:42:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 15:43:20.034 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-16-21.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 15:43:12.641023       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 15:43:12.641257       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715787792 cert, and key in /tmp/serving-cert-134104910/serving-signer.crt, /tmp/serving-cert-134104910/serving-signer.key\nStaticPodsDegraded: I0515 15:43:12.953888       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 15:43:12.955272       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-21.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 15:43:12.955382       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0515 15:43:12.955983       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-134104910/tls.crt::/tmp/serving-cert-134104910/tls.key"\nStaticPodsDegraded: F0515 15:43:13.110618       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 15 15:48:20.523 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-33-50.us-west-2.compute.internal" not ready since 2024-05-15 15:46:20 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790743771260915712junit3 days ago
May 15 16:40:41.490 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-16-21.us-west-2.compute.internal" not ready since 2024-05-15 16:38:41 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 16:41:10.086 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-16-21.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 16:41:02.657299       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 16:41:02.657499       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715791262 cert, and key in /tmp/serving-cert-3839608270/serving-signer.crt, /tmp/serving-cert-3839608270/serving-signer.key\nStaticPodsDegraded: I0515 16:41:02.896177       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 16:41:02.897283       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-21.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 16:41:02.897410       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0515 16:41:02.898025       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3839608270/tls.crt::/tmp/serving-cert-3839608270/tls.key"\nStaticPodsDegraded: F0515 16:41:03.275644       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 15 16:45:55.441 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-88-91.us-west-2.compute.internal" not ready since 2024-05-15 16:43:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790737123624620032junit3 days ago
May 15 15:19:35.215 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-109-25.us-west-1.compute.internal" not ready since 2024-05-15 15:19:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 15:19:50.565 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-109-25.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 15:19:42.430244       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 15:19:42.430455       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715786382 cert, and key in /tmp/serving-cert-512147527/serving-signer.crt, /tmp/serving-cert-512147527/serving-signer.key\nStaticPodsDegraded: I0515 15:19:42.730290       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 15:19:42.732096       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-109-25.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 15:19:42.732462       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0515 15:19:42.733061       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-512147527/tls.crt::/tmp/serving-cert-512147527/tls.key"\nStaticPodsDegraded: F0515 15:19:43.063612       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 16:04:16.792 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-39-104.us-west-1.compute.internal" not ready since 2024-05-15 16:02:16 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790737123624620032junit3 days ago
May 15 16:15:30.065 - 19s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-120-193.us-west-1.compute.internal" not ready since 2024-05-15 16:15:16 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 16:15:49.373 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-120-193.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 16:15:41.071170       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 16:15:41.071545       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715789741 cert, and key in /tmp/serving-cert-936074423/serving-signer.crt, /tmp/serving-cert-936074423/serving-signer.key\nStaticPodsDegraded: I0515 16:15:41.230500       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 16:15:41.231898       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-120-193.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 16:15:41.232006       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0515 16:15:41.232601       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-936074423/tls.crt::/tmp/serving-cert-936074423/tls.key"\nStaticPodsDegraded: F0515 16:15:41.330075       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1790706134407974912 junit 3 days ago
May 15 13:17:46.397 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-23-59.us-east-2.compute.internal" not ready since 2024-05-15 13:17:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 13:17:55.958 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-23-59.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 13:17:47.776895       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 13:17:47.777151       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715779067 cert, and key in /tmp/serving-cert-1237663456/serving-signer.crt, /tmp/serving-cert-1237663456/serving-signer.key\nStaticPodsDegraded: I0515 13:17:48.095878       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 13:17:48.098185       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-23-59.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 13:17:48.098418       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0515 13:17:48.099046       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1237663456/tls.crt::/tmp/serving-cert-1237663456/tls.key"\nStaticPodsDegraded: F0515 13:17:48.747466       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 15 13:23:07.285 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-125-29.us-east-2.compute.internal" not ready since 2024-05-15 13:22:47 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790718592694620160 junit 3 days ago
May 15 13:51:40.644 - 8s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-24-0.us-east-2.compute.internal" not ready since 2024-05-15 13:51:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 15 13:51:49.308 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-24-0.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0515 13:51:39.383244       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0515 13:51:39.383597       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715781099 cert, and key in /tmp/serving-cert-3694787073/serving-signer.crt, /tmp/serving-cert-3694787073/serving-signer.key\nStaticPodsDegraded: I0515 13:51:40.256978       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0515 13:51:40.272496       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-24-0.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0515 13:51:40.272588       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0515 13:51:40.302819       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3694787073/tls.crt::/tmp/serving-cert-3694787073/tls.key"\nStaticPodsDegraded: F0515 13:51:40.574940       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 15 13:56:47.047 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-64-165.us-east-2.compute.internal" not ready since 2024-05-15 13:54:47 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1790757018252873728 junit 3 days ago
cause/Error code/2 reason/ContainerExit -client@1715786302\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1715786302\" (2024-05-15 14:18:21 +0000 UTC to 2025-05-15 14:18:21 +0000 UTC (now=2024-05-15 15:23:37.664967386 +0000 UTC))"
E0515 15:26:34.743812       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-6djc2b1s-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.64.88:6443: connect: connection refused
I0515 15:27:15.636593       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1790757018252873728 junit 3 days ago
I0515 15:27:57.061478       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0515 15:33:44.518608       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-6djc2b1s-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.64.88:6443: connect: connection refused
I0515 15:34:02.932651       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
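The cloud-controller-manager lines above are the client-go leader-election loop retrying its Lease lookup against the api-int endpoint; each refused dial surfaces as one "error retrieving resource lock" entry and is retried rather than treated as fatal. A hedged sketch of that loop (durations are illustrative, not necessarily the component's actual settings):

{code:go}
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runElection shows the retry behavior behind the logged errors: on every
// RetryPeriod tick the elector GETs the Lease object, and a refused connection
// to the apiserver is logged at E level and retried on the next tick.
func runElection(ctx context.Context, client kubernetes.Interface, id string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "openshift-cloud-controller-manager",
			Name:      "cloud-controller-manager",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative values
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start controllers */ },
			OnStoppedLeading: func() { /* lease lost; stop work */ },
		},
	})
}
{code}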
#1790692036492398592 junit 3 days ago
I0515 11:30:19.269163       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0515 11:34:16.995997       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-7iq6475p-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.6.112:6443: connect: connection refused
E0515 11:34:32.083312       1 reflector.go:147] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
#1790692036492398592 junit 3 days ago
I0515 12:47:33.974994       1 observer_polling.go:159] Starting file observer
W0515 12:47:33.996896       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-47.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0515 12:47:33.997030       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691
#1790473777457401856 junit 4 days ago
May 14 21:41:02.903 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-19-8.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-19-8.us-west-1.compute.internal_openshift-kube-apiserver(ba1f43fdb56431b187357fcf20e07a0e) (exception: Degraded=False is the happy case)
May 14 21:46:21.236 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-54-41.us-west-1.compute.internal" not ready since 2024-05-14 21:45:54 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-54-41.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 21:46:18.436330       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 21:46:18.436700       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715723178 cert, and key in /tmp/serving-cert-4260006978/serving-signer.crt, /tmp/serving-cert-4260006978/serving-signer.key\nStaticPodsDegraded: I0514 21:46:19.051674       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 21:46:19.063837       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-54-41.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 21:46:19.063964       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 21:46:19.093194       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4260006978/tls.crt::/tmp/serving-cert-4260006978/tls.key"\nStaticPodsDegraded: F0514 21:46:19.335554       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 21:46:21.236 - 7s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady::StaticPods_Error status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-54-41.us-west-1.compute.internal" not ready since 2024-05-14 21:45:54 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-54-41.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 21:46:18.436330       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 21:46:18.436700       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715723178 cert, and key in /tmp/serving-cert-4260006978/serving-signer.crt, /tmp/serving-cert-4260006978/serving-signer.key\nStaticPodsDegraded: I0514 21:46:19.051674       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 21:46:19.063837       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-54-41.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 21:46:19.063964       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 21:46:19.093194       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4260006978/tls.crt::/tmp/serving-cert-4260006978/tls.key"\nStaticPodsDegraded: F0514 21:46:19.335554       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: We are not worried about Degraded=True blips for update tests yet.)

... 1 line not shown

#1790641005670699008 junit 3 days ago
I0515 07:55:06.531368       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0515 07:55:09.724136       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-zrf4qc8l-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.28.199:6443: connect: connection refused
E0515 08:00:45.557981       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-zrf4qc8l-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.28.199:6443: connect: connection refused

... 1 line not shown

#1790648685672009728 junit 3 days ago
I0515 10:23:29.948886       1 observer_polling.go:159] Starting file observer
W0515 10:23:29.975223       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-4-67.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0515 10:23:29.975439       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1790630445315002368 junit 3 days ago
I0515 07:12:53.829269       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0515 07:15:48.853678       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-96l4nqx6-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.99.68:6443: connect: connection refused
I0515 07:15:58.329299       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1790630445315002368 junit 3 days ago
I0515 08:35:51.013603       1 observer_polling.go:159] Starting file observer
W0515 08:35:51.034363       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-78.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0515 08:35:51.034479       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691
#1790454137184325632 junit 4 days ago
May 14 21:24:44.734 - 25s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-116.us-west-2.compute.internal" not ready since 2024-05-14 21:22:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 21:25:10.599 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-103-116.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 21:25:02.446734       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 21:25:02.446933       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715721902 cert, and key in /tmp/serving-cert-537406515/serving-signer.crt, /tmp/serving-cert-537406515/serving-signer.key\nStaticPodsDegraded: I0514 21:25:02.951805       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 21:25:02.953933       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-116.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 21:25:02.954047       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 21:25:02.954618       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-537406515/tls.crt::/tmp/serving-cert-537406515/tls.key"\nStaticPodsDegraded: F0514 21:25:03.048246       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 21:30:19.756 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-37-165.us-west-2.compute.internal" not ready since 2024-05-14 21:30:01 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790454137184325632 junit 4 days ago
I0514 21:25:00.824182       1 observer_polling.go:159] Starting file observer
W0514 21:25:00.834923       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-116.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0514 21:25:00.835051       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691
#1790436081078898688 junit 4 days ago
I0514 20:21:12.843353       1 observer_polling.go:159] Starting file observer
W0514 20:21:12.863548       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-231.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0514 20:21:12.863685       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1790397642082095104 junit 4 days ago
I0514 17:42:13.007818       1 observer_polling.go:159] Starting file observer
W0514 17:42:13.027267       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-206.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0514 17:42:13.027380       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1790433895280283648 junit 4 days ago
May 14 19:03:13.462 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-56.us-west-2.compute.internal" not ready since 2024-05-14 19:01:13 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 19:03:45.264 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-56.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 19:03:37.029471       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 19:03:37.036180       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715713417 cert, and key in /tmp/serving-cert-3614892769/serving-signer.crt, /tmp/serving-cert-3614892769/serving-signer.key\nStaticPodsDegraded: I0514 19:03:37.209050       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 19:03:37.210386       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-56.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 19:03:37.210493       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 19:03:37.211125       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3614892769/tls.crt::/tmp/serving-cert-3614892769/tls.key"\nStaticPodsDegraded: F0514 19:03:37.345261       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 19:09:03.468 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-211.us-west-2.compute.internal" not ready since 2024-05-14 19:08:57 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790433895280283648 junit 4 days ago
May 14 20:06:36.215 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-40-56.us-west-2.compute.internal" not ready since 2024-05-14 20:06:18 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 20:06:51.005 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-40-56.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 20:06:42.761377       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 20:06:42.761607       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715717202 cert, and key in /tmp/serving-cert-2663163567/serving-signer.crt, /tmp/serving-cert-2663163567/serving-signer.key\nStaticPodsDegraded: I0514 20:06:43.068700       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 20:06:43.070281       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-40-56.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 20:06:43.070396       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 20:06:43.071070       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2663163567/tls.crt::/tmp/serving-cert-2663163567/tls.key"\nStaticPodsDegraded: F0514 20:06:43.309145       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 14 20:11:59.855 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-118-211.us-west-2.compute.internal" not ready since 2024-05-14 20:11:47 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790331722311667712 junit 4 days ago
May 14 12:25:21.563 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-79.ec2.internal" not ready since 2024-05-14 12:25:02 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 12:25:34.280 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-46-79.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 12:25:26.564139       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 12:25:26.564352       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715689526 cert, and key in /tmp/serving-cert-1988033862/serving-signer.crt, /tmp/serving-cert-1988033862/serving-signer.key\nStaticPodsDegraded: I0514 12:25:26.739605       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 12:25:26.740969       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-46-79.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 12:25:26.741105       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 12:25:26.741775       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1988033862/tls.crt::/tmp/serving-cert-1988033862/tls.key"\nStaticPodsDegraded: F0514 12:25:27.153161       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 12:30:43.557 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-11-97.ec2.internal" not ready since 2024-05-14 12:30:41 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790331722311667712 junit 4 days ago
May 14 13:33:39.387 - 10s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-103-50.ec2.internal" not ready since 2024-05-14 13:33:28 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 13:33:50.099 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-103-50.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 13:33:41.899323       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 13:33:41.899742       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715693621 cert, and key in /tmp/serving-cert-3383120518/serving-signer.crt, /tmp/serving-cert-3383120518/serving-signer.key\nStaticPodsDegraded: I0514 13:33:42.122845       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 13:33:42.124517       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-50.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 13:33:42.124678       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 13:33:42.125407       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3383120518/tls.crt::/tmp/serving-cert-3383120518/tls.key"\nStaticPodsDegraded: F0514 13:33:42.294359       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 13:39:00.770 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-46-79.ec2.internal" not ready since 2024-05-14 13:38:38 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
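The NodeController_MasterNodesReady messages in these runs are driven by the Ready condition on the master Node objects; "Kubelet stopped posting node status" and "KubeletNotReady (container runtime network not ready ...)" are the two reasons that recur. A simplified sketch of reading that condition (an assumption-level illustration, not the operator's actual logic):

{code:go}
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether a Node's Ready condition is True; the
// NodeControllerDegraded messages above embed this condition's reason and
// message whenever it is False or Unknown.
func nodeReady(node *corev1.Node) (bool, string) {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, c.Message
		}
	}
	return false, "Ready condition not reported"
}

func main() {
	// Hypothetical Node object, used only to exercise the helper.
	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{{
		Type:    corev1.NodeReady,
		Status:  corev1.ConditionFalse,
		Reason:  "KubeletNotReady",
		Message: "container runtime network not ready",
	}}}}
	ready, msg := nodeReady(n)
	fmt.Println(ready, msg)
}
{code}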
#1790397367573286912 junit 4 days ago
I0514 17:39:46.400082       1 observer_polling.go:159] Starting file observer
W0514 17:39:46.412629       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-217.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0514 17:39:46.412794       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691

... 3 lines not shown

#1790393462445576192 junit 4 days ago
May 14 16:30:58.381 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-126-211.us-west-2.compute.internal" not ready since 2024-05-14 16:30:57 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 14 16:31:29.823 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-126-211.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0514 16:31:22.219108       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0514 16:31:22.219368       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715704282 cert, and key in /tmp/serving-cert-3211966199/serving-signer.crt, /tmp/serving-cert-3211966199/serving-signer.key\nStaticPodsDegraded: I0514 16:31:22.464312       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0514 16:31:22.466369       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-126-211.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0514 16:31:22.466497       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1982-g313bc06-313bc0691\nStaticPodsDegraded: I0514 16:31:22.467321       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3211966199/tls.crt::/tmp/serving-cert-3211966199/tls.key"\nStaticPodsDegraded: F0514 16:31:22.594546       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 14 16:36:38.136 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-33-228.us-west-2.compute.internal" not ready since 2024-05-14 16:36:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790393462445576192 junit 4 days ago
I0514 15:23:25.170647       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
E0514 15:23:25.846014       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-0km0pi2f-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.49.173:6443: connect: connection refused
I0514 15:23:29.216948       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1790028865758826496 junit 5 days ago
May 13 16:25:59.545 - 29s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-122-210.us-west-1.compute.internal" not ready since 2024-05-13 16:25:53 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 16:26:29.366 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-122-210.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 16:26:20.005638       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 16:26:20.005873       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715617580 cert, and key in /tmp/serving-cert-1292471303/serving-signer.crt, /tmp/serving-cert-1292471303/serving-signer.key\nStaticPodsDegraded: I0513 16:26:20.371897       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 16:26:20.373777       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-122-210.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 16:26:20.373966       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 16:26:20.374800       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1292471303/tls.crt::/tmp/serving-cert-1292471303/tls.key"\nStaticPodsDegraded: F0513 16:26:20.680838       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 13 16:31:38.278 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-201.us-west-1.compute.internal" not ready since 2024-05-13 16:31:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1790028865758826496 junit 5 days ago
May 13 17:15:25.265 - 30s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-12-201.us-west-1.compute.internal" not ready since 2024-05-13 17:13:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 17:15:55.887 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-12-201.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 17:15:46.403043       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 17:15:46.403705       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715620546 cert, and key in /tmp/serving-cert-2304256710/serving-signer.crt, /tmp/serving-cert-2304256710/serving-signer.key\nStaticPodsDegraded: I0513 17:15:47.008126       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 17:15:47.022783       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-12-201.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 17:15:47.022941       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 17:15:47.045889       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2304256710/tls.crt::/tmp/serving-cert-2304256710/tls.key"\nStaticPodsDegraded: F0513 17:15:47.395767       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 13 17:21:01.273 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-122-210.us-west-1.compute.internal" not ready since 2024-05-13 17:20:53 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1789785831028822016 junit 5 days ago
1 test failed during this blip (2024-05-13 00:06:40.411325343 +0000 UTC m=+2718.350132965 to 2024-05-13 00:06:40.411325343 +0000 UTC m=+2718.350132965): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 00:07:09.221 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-33-250.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 00:07:01.671009       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 00:07:01.671272       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715558821 cert, and key in /tmp/serving-cert-1416261141/serving-signer.crt, /tmp/serving-cert-1416261141/serving-signer.key\nStaticPodsDegraded: I0513 00:07:01.882212       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 00:07:01.883650       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-33-250.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 00:07:01.883764       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 00:07:01.884333       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1416261141/tls.crt::/tmp/serving-cert-1416261141/tls.key"\nStaticPodsDegraded: F0513 00:07:02.074006       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 test failed during this blip (2024-05-13 00:07:09.221131167 +0000 UTC m=+2747.159938789 to 2024-05-13 00:07:09.221131167 +0000 UTC m=+2747.159938789): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1789785831028822016 junit 5 days ago
1 test failed during this blip (2024-05-13 00:17:56.094317274 +0000 UTC m=+3394.033124906 to 2024-05-13 00:17:56.094317274 +0000 UTC m=+3394.033124906): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 00:18:08.120 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-64-145.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 00:18:00.383398       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 00:18:00.383726       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715559480 cert, and key in /tmp/serving-cert-2289073297/serving-signer.crt, /tmp/serving-cert-2289073297/serving-signer.key\nStaticPodsDegraded: I0513 00:18:00.841522       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 00:18:00.843394       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-64-145.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 00:18:00.843523       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 00:18:00.844219       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2289073297/tls.crt::/tmp/serving-cert-2289073297/tls.key"\nStaticPodsDegraded: F0513 00:18:01.028403       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 test failed during this blip (2024-05-13 00:18:08.120480594 +0000 UTC m=+3406.059288225 to 2024-05-13 00:18:08.120480594 +0000 UTC m=+3406.059288225): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1790013052351942656 junit 5 days ago
1 test failed during this blip (2024-05-13 15:09:16.198949371 +0000 UTC m=+2664.784705336 to 2024-05-13 15:09:16.198949371 +0000 UTC m=+2664.784705336): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 15:09:48.705 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-80-56.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 15:09:40.101323       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 15:09:40.101572       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715612980 cert, and key in /tmp/serving-cert-1220944430/serving-signer.crt, /tmp/serving-cert-1220944430/serving-signer.key\nStaticPodsDegraded: I0513 15:09:40.487463       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 15:09:40.488949       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-80-56.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 15:09:40.489197       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 15:09:40.489889       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1220944430/tls.crt::/tmp/serving-cert-1220944430/tls.key"\nStaticPodsDegraded: F0513 15:09:40.712746       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 test failed during this blip (2024-05-13 15:09:48.705369175 +0000 UTC m=+2697.291125130 to 2024-05-13 15:09:48.705369175 +0000 UTC m=+2697.291125130): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1790013052351942656 junit 5 days ago
1 test failed during this blip (2024-05-13 15:20:03.430885582 +0000 UTC m=+3312.016641517 to 2024-05-13 15:20:03.430885582 +0000 UTC m=+3312.016641517): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 15:20:34.077 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-1-129.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 15:20:26.585188       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 15:20:26.585437       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715613626 cert, and key in /tmp/serving-cert-996149310/serving-signer.crt, /tmp/serving-cert-996149310/serving-signer.key\nStaticPodsDegraded: I0513 15:20:26.845488       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 15:20:26.851955       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-1-129.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 15:20:26.852360       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 15:20:26.853250       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-996149310/tls.crt::/tmp/serving-cert-996149310/tls.key"\nStaticPodsDegraded: F0513 15:20:27.115109       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 test failed during this blip (2024-05-13 15:20:34.07766087 +0000 UTC m=+3342.663416814 to 2024-05-13 15:20:34.07766087 +0000 UTC m=+3342.663416814): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1789987072111546368 junit 5 days ago
May 13 13:28:06.348 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-96-4.us-west-1.compute.internal" not ready since 2024-05-13 13:26:06 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 13 13:28:41.679 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-96-4.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0513 13:28:33.795878       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0513 13:28:33.796206       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715606913 cert, and key in /tmp/serving-cert-2002050171/serving-signer.crt, /tmp/serving-cert-2002050171/serving-signer.key\nStaticPodsDegraded: I0513 13:28:34.029347       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0513 13:28:34.031232       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-96-4.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0513 13:28:34.031564       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0513 13:28:34.032421       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2002050171/tls.crt::/tmp/serving-cert-2002050171/tls.key"\nStaticPodsDegraded: F0513 13:28:34.366298       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 13 13:33:34.488 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-127-95.us-west-1.compute.internal" not ready since 2024-05-13 13:31:34 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1789987072111546368 junit 5 days ago
I0513 14:31:03.563467       1 observer_polling.go:159] Starting file observer
W0513 14:31:03.600985       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-95.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0513 14:31:03.601139       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290
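
The builder.go:267 warning in these triplets is the check-endpoints startup path trying to GET its own Pod to derive an owner reference, then falling back to just the namespace when the apiserver is unreachable. A rough client-go sketch of that lookup, assuming POD_NAME/POD_NAMESPACE arrive via the downward API (an assumption for illustration, not library-go's actual code):

{code:go}
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Approximation of the owner-reference lookup: GET our own Pod; if the
// apiserver is unreachable, fall back to using only the namespace.
// POD_NAMESPACE and POD_NAME are assumed to come from the downward API.
func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name := os.Getenv("POD_NAMESPACE"), os.Getenv("POD_NAME")
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		fmt.Printf("unable to get owner reference (falling back to namespace %q): %v\n", ns, err)
		return
	}
	fmt.Printf("resolved owner reference to pod UID %s\n", pod.UID)
}
{code}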
#1788955844939878400 junit 8 days ago
May 10 18:08:05.584 - 40s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-30-169.us-west-2.compute.internal" not ready since 2024-05-10 18:06:04 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 10 18:08:45.707 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-30-169.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0510 18:08:36.460512       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0510 18:08:36.460978       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715364516 cert, and key in /tmp/serving-cert-1904369010/serving-signer.crt, /tmp/serving-cert-1904369010/serving-signer.key\nStaticPodsDegraded: I0510 18:08:36.859077       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0510 18:08:36.860568       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-30-169.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0510 18:08:36.860696       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0510 18:08:36.861328       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1904369010/tls.crt::/tmp/serving-cert-1904369010/tls.key"\nStaticPodsDegraded: F0510 18:08:37.082208       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 10 18:13:51.594 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-227.us-west-2.compute.internal" not ready since 2024-05-10 18:13:30 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788955844939878400 junit 8 days ago
May 10 18:19:13.048 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-51-92.us-west-2.compute.internal" not ready since 2024-05-10 18:19:03 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 10 18:19:25.781 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-51-92.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0510 18:19:18.298851       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0510 18:19:18.299093       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715365158 cert, and key in /tmp/serving-cert-646380626/serving-signer.crt, /tmp/serving-cert-646380626/serving-signer.key\nStaticPodsDegraded: I0510 18:19:18.842226       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0510 18:19:18.843832       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-92.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0510 18:19:18.843947       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0510 18:19:18.844601       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-646380626/tls.crt::/tmp/serving-cert-646380626/tls.key"\nStaticPodsDegraded: F0510 18:19:19.226938       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
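
The NodeControllerDegraded messages of the form "node ... not ready since ... because ..." are derived from the NodeReady condition on the master Node objects. A hedged client-go sketch of the equivalent check (the label selector and kubeconfig wiring are assumptions, not the operator's implementation):

{code:go}
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List control-plane nodes and report any whose NodeReady condition is not
// True, including the reason (NodeStatusUnknown, KubeletNotReady, ...).
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "node-role.kubernetes.io/master",
	})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("node %q not ready since %s because %s (%s)\n",
					n.Name, c.LastTransitionTime, c.Reason, c.Message)
			}
		}
	}
}
{code}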
#1788932647947341824 junit 8 days ago
I0510 16:29:09.360477       1 observer_polling.go:159] Starting file observer
W0510 16:29:09.375998       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-13-36.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0510 16:29:09.376126       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290

... 3 lines not shown

#1788692902830936064 junit 8 days ago
I0510 00:51:37.260629       1 observer_polling.go:159] Starting file observer
W0510 00:51:37.273569       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-6-148.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0510 00:51:37.273835       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290

... 3 lines not shown

#1788813106726572032 junit 8 days ago
May 10 07:59:27.061 - 11s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-104-129.us-west-2.compute.internal" not ready since 2024-05-10 07:59:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 10 07:59:38.445 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-104-129.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0510 07:59:30.837065       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0510 07:59:30.837333       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715327970 cert, and key in /tmp/serving-cert-2888765895/serving-signer.crt, /tmp/serving-cert-2888765895/serving-signer.key\nStaticPodsDegraded: I0510 07:59:31.143093       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0510 07:59:31.144753       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-104-129.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0510 07:59:31.144889       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290\nStaticPodsDegraded: I0510 07:59:31.145489       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2888765895/tls.crt::/tmp/serving-cert-2888765895/tls.key"\nStaticPodsDegraded: F0510 07:59:31.305515       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 10 08:47:00.916 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-17-41.us-west-2.compute.internal" not ready since 2024-05-10 08:45:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788813106726572032 junit 8 days ago
namespace/openshift-cloud-controller-manager node/ip-10-0-17-41.us-west-2.compute.internal pod/aws-cloud-controller-manager-5c98dc677f-qp85f uid/6d77d0b7-e040-4670-b04f-4665a9e81808 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit //api-int.ci-op-76nld8gr-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.70.192:6443: connect: connection refused
E0510 06:49:18.343689       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-76nld8gr-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.39.133:6443: connect: connection refused

... 2 lines not shown

#1788866274336444416 junit 8 days ago
I0510 11:22:36.264781       1 observer_polling.go:159] Starting file observer
W0510 11:22:36.341844       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-136.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0510 11:22:36.341969       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290

... 3 lines not shown

#1788682091546808320 junit 9 days ago
I0510 00:10:31.061580       1 observer_polling.go:159] Starting file observer
W0510 00:10:31.086158       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-112-193.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0510 00:10:31.086386       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1979-gb352992-b35299290

... 3 lines not shown

#1788629815364947968 junit 9 days ago
May 09 19:52:29.255 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-111-137.us-west-2.compute.internal" not ready since 2024-05-09 19:52:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 09 19:52:38.833 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-111-137.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0509 19:52:31.009097       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0509 19:52:31.009333       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715284351 cert, and key in /tmp/serving-cert-3672462214/serving-signer.crt, /tmp/serving-cert-3672462214/serving-signer.key\nStaticPodsDegraded: I0509 19:52:31.224334       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0509 19:52:31.225802       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-111-137.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0509 19:52:31.225919       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0509 19:52:31.226484       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3672462214/tls.crt::/tmp/serving-cert-3672462214/tls.key"\nStaticPodsDegraded: F0509 19:52:31.357127       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 09 19:57:36.545 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-1-99.us-west-2.compute.internal" not ready since 2024-05-09 19:55:36 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788629815364947968 junit 9 days ago
May 09 20:46:51.079 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-111-137.us-west-2.compute.internal" not ready since 2024-05-09 20:46:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 09 20:47:06.319 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-111-137.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0509 20:46:56.685624       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0509 20:46:56.685866       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715287616 cert, and key in /tmp/serving-cert-2588549011/serving-signer.crt, /tmp/serving-cert-2588549011/serving-signer.key\nStaticPodsDegraded: I0509 20:46:57.196602       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0509 20:46:57.222171       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-111-137.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0509 20:46:57.222313       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0509 20:46:57.248577       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2588549011/tls.crt::/tmp/serving-cert-2588549011/tls.key"\nStaticPodsDegraded: F0509 20:46:57.476931       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 09 20:53:50.616 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-35-75.us-west-2.compute.internal" not ready since 2024-05-09 20:53:42 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788568181204324352 junit 9 days ago
namespace/openshift-cloud-controller-manager node/ip-10-0-122-227.us-west-1.compute.internal pod/aws-cloud-controller-manager-79bb7599bc-9zks7 uid/f355d726-08a0-457d-947a-6e1043f85c27 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit  Get "https://api-int.ci-op-9k53my97-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.122.25:6443: connect: connection refused
E0509 14:32:09.048935       1 reflector.go:147] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
#1788568181204324352 junit 9 days ago
I0509 14:32:36.406938       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0509 14:36:17.902836       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-9k53my97-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.122.25:6443: connect: connection refused
I0509 14:36:28.539535       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
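
The leaderelection.go:332 errors here are the cloud-controller-manager's lease renewals failing against api-int; client-go's leader-election loop logs the error and retries on its RetryPeriod rather than exiting. A minimal sketch against the same Lease (identity and the timeout values are illustrative assumptions, not the shipped configuration):

{code:go}
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// Minimal leader election on the same Lease. Every renewal is an apiserver
// round trip; when api-int is unreachable the renewal fails, client-go logs
// "error retrieving resource lock ... connection refused" and retries.
func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "cloud-controller-manager",
			Namespace: "openshift-cloud-controller-manager",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative values, not the shipped configuration
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* controllers run here */ },
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}
{code}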
#1788596363815030784 junit 9 days ago
I0509 17:32:24.314948       1 observer_polling.go:159] Starting file observer
W0509 17:32:24.327507       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-118-143.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0509 17:32:24.327683       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788624809081442304 junit 9 days ago
May 09 19:11:53.578 - 17s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-2-68.us-west-1.compute.internal" not ready since 2024-05-09 19:11:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 09 19:12:11.179 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-2-68.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0509 19:12:02.372295       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0509 19:12:02.372554       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715281922 cert, and key in /tmp/serving-cert-1185205066/serving-signer.crt, /tmp/serving-cert-1185205066/serving-signer.key\nStaticPodsDegraded: I0509 19:12:02.608464       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0509 19:12:02.609976       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-2-68.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0509 19:12:02.610094       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0509 19:12:02.610731       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1185205066/tls.crt::/tmp/serving-cert-1185205066/tls.key"\nStaticPodsDegraded: F0509 19:12:02.758606       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 09 19:17:22.052 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-68-53.us-west-1.compute.internal" not ready since 2024-05-09 19:17:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1788542695434620928 junit 9 days ago
I0509 14:44:10.157891       1 observer_polling.go:159] Starting file observer
W0509 14:44:10.175135       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-14-13.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0509 14:44:10.175267       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788324379965263872 junit 10 days ago
I0508 23:23:41.086280       1 observer_polling.go:159] Starting file observer
W0508 23:23:41.095430       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-47-41.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0508 23:23:41.095598       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788525952687411200 junit 9 days ago
I0509 13:50:30.734516       1 observer_polling.go:159] Starting file observer
W0509 13:50:30.748157       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-121-108.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0509 13:50:30.748316       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788385736416825344 junit 9 days ago
May 09 03:26:57.614 - 20s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-62-31.us-west-1.compute.internal" not ready since 2024-05-09 03:26:44 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 09 03:27:18.075 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-62-31.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0509 03:27:09.790955       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0509 03:27:09.791184       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715225229 cert, and key in /tmp/serving-cert-3716487280/serving-signer.crt, /tmp/serving-cert-3716487280/serving-signer.key\nStaticPodsDegraded: I0509 03:27:10.090657       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0509 03:27:10.092446       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-62-31.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0509 03:27:10.092619       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0509 03:27:10.093470       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3716487280/tls.crt::/tmp/serving-cert-3716487280/tls.key"\nStaticPodsDegraded: F0509 03:27:10.437376       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 09 03:32:24.191 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-166.us-west-1.compute.internal" not ready since 2024-05-09 03:32:05 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788385736416825344 junit 9 days ago
May 09 04:30:14.280 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-116-166.us-west-1.compute.internal" not ready since 2024-05-09 04:29:55 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 09 04:30:29.050 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-116-166.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0509 04:30:21.063524       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0509 04:30:21.064042       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715229021 cert, and key in /tmp/serving-cert-2286809801/serving-signer.crt, /tmp/serving-cert-2286809801/serving-signer.key\nStaticPodsDegraded: I0509 04:30:21.223623       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0509 04:30:21.225381       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-116-166.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0509 04:30:21.225523       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0509 04:30:21.226104       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2286809801/tls.crt::/tmp/serving-cert-2286809801/tls.key"\nStaticPodsDegraded: F0509 04:30:21.450352       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 09 04:35:34.391 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-62-31.us-west-1.compute.internal" not ready since 2024-05-09 04:35:27 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788309435270041600 junit 10 days ago
I0508 22:32:13.960269       1 observer_polling.go:159] Starting file observer
W0508 22:32:13.969423       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-127-32.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0508 22:32:13.969516       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788281825458655232 junit 10 days ago
I0508 21:43:25.033540       1 observer_polling.go:159] Starting file observer
W0508 21:43:25.049522       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-110-182.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0508 21:43:25.049675       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788196405261635584 junit 10 days ago
I0508 15:54:29.201503       1 observer_polling.go:159] Starting file observer
W0508 15:54:29.214242       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-101-216.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0508 15:54:29.214383       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788229477084434432 junit 10 days ago
I0508 17:05:07.065300       1 observer_polling.go:159] Starting file observer
W0508 17:05:07.076872       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-50-216.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0508 17:05:07.077020       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788207451300958208 junit 10 days ago
May 08 16:32:01.748 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-102-129.ec2.internal" not ready since 2024-05-08 16:30:01 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 16:32:37.091 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-102-129.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 16:32:28.958889       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 16:32:28.959104       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715185948 cert, and key in /tmp/serving-cert-3231451827/serving-signer.crt, /tmp/serving-cert-3231451827/serving-signer.key\nStaticPodsDegraded: I0508 16:32:29.303457       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 16:32:29.305040       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-129.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 16:32:29.305142       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0508 16:32:29.305843       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3231451827/tls.crt::/tmp/serving-cert-3231451827/tls.key"\nStaticPodsDegraded: F0508 16:32:29.466153       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 08 16:37:48.363 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-93-183.ec2.internal" not ready since 2024-05-08 16:37:41 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
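
The interval entries themselves are rendered from the kube-apiserver ClusterOperator's Degraded condition; the Reason/Message pairs (NodeController_MasterNodesReady, AsExpected) are the condition fields. A small sketch that reads the same condition with the openshift config clientset (assuming github.com/openshift/client-go is available; not the monitor's own code):

{code:go}
package main

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// Fetch the kube-apiserver ClusterOperator and print its Degraded condition;
// the Reason/Message pair is what the interval entries above render.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := configclient.NewForConfigOrDie(cfg)

	co, err := client.ConfigV1().ClusterOperators().Get(context.TODO(), "kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range co.Status.Conditions {
		if c.Type == "Degraded" {
			fmt.Printf("Degraded=%s reason=%s since %s\n%s\n",
				c.Status, c.Reason, c.LastTransitionTime, c.Message)
		}
	}
}
{code}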
#1788207451300958208 junit 10 days ago
I0508 15:44:11.442920       1 observer_polling.go:159] Starting file observer
W0508 15:44:11.459252       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-102-129.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0508 15:44:11.459390       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1788146068584665088 junit 10 days ago
I0508 10:37:36.307332       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0508 10:37:38.400980       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-kyytlp45-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.43.192:6443: connect: connection refused
I0508 10:37:47.195407       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1788146068584665088 junit 10 days ago
I0508 12:54:19.891018       1 observer_polling.go:159] Starting file observer
W0508 12:54:19.901114       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-103-109.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0508 12:54:19.901255       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1787928077322424320 junit 11 days ago
May 07 21:25:55.106 - 15s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-105-39.us-west-1.compute.internal" not ready since 2024-05-07 21:25:36 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 21:26:10.184 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-105-39.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 21:26:01.741778       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 21:26:01.742005       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715117161 cert, and key in /tmp/serving-cert-3613008428/serving-signer.crt, /tmp/serving-cert-3613008428/serving-signer.key\nStaticPodsDegraded: I0507 21:26:02.125312       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 21:26:02.127456       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-105-39.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 21:26:02.127568       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 21:26:02.128122       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3613008428/tls.crt::/tmp/serving-cert-3613008428/tls.key"\nStaticPodsDegraded: F0507 21:26:02.354391       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 07 22:13:26.344 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-33-23.us-west-1.compute.internal" not ready since 2024-05-07 22:11:26 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787914395213369344 junit 11 days ago
May 07 20:12:14.961 - 25s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-51.us-east-2.compute.internal" not ready since 2024-05-07 20:10:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 20:12:40.090 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-97-51.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 20:12:33.271940       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 20:12:33.272189       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715112753 cert, and key in /tmp/serving-cert-367032041/serving-signer.crt, /tmp/serving-cert-367032041/serving-signer.key\nStaticPodsDegraded: I0507 20:12:33.543055       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 20:12:33.544378       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-97-51.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 20:12:33.544505       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 20:12:33.545059       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-367032041/tls.crt::/tmp/serving-cert-367032041/tls.key"\nStaticPodsDegraded: F0507 20:12:33.782336       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 07 20:17:42.956 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-77-40.us-east-2.compute.internal" not ready since 2024-05-07 20:15:42 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787914395213369344 junit 11 days ago
May 07 21:18:56.661 - 33s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-97-51.us-east-2.compute.internal" not ready since 2024-05-07 21:16:56 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 21:19:29.795 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-97-51.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 21:19:22.685332       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 21:19:22.685536       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715116762 cert, and key in /tmp/serving-cert-3618819996/serving-signer.crt, /tmp/serving-cert-3618819996/serving-signer.key\nStaticPodsDegraded: I0507 21:19:23.070744       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 21:19:23.072506       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-97-51.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 21:19:23.072642       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 21:19:23.073368       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3618819996/tls.crt::/tmp/serving-cert-3618819996/tls.key"\nStaticPodsDegraded: F0507 21:19:23.196568       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
#1787910445496012800 junit 11 days ago
1 tests failed during this blip (2024-05-07 20:16:25.534132367 +0000 UTC m=+3406.657308734 to 2024-05-07 20:16:25.534132367 +0000 UTC m=+3406.657308734): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 20:16:39.637 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-108-17.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 20:16:31.356707       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 20:16:31.357078       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715112991 cert, and key in /tmp/serving-cert-3892328290/serving-signer.crt, /tmp/serving-cert-3892328290/serving-signer.key\nStaticPodsDegraded: I0507 20:16:31.664076       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 20:16:31.665481       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-17.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 20:16:31.665582       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 20:16:31.666136       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3892328290/tls.crt::/tmp/serving-cert-3892328290/tls.key"\nStaticPodsDegraded: F0507 20:16:31.862935       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-07 20:16:39.637609884 +0000 UTC m=+3420.760786251 to 2024-05-07 20:16:39.637609884 +0000 UTC m=+3420.760786251): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1787910445496012800 junit 11 days ago
1 tests failed during this blip (2024-05-07 21:04:12.377316571 +0000 UTC m=+6273.500492938 to 2024-05-07 21:04:12.377316571 +0000 UTC m=+6273.500492938): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 21:04:47.486 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-108-17.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 21:04:40.978329       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 21:04:40.978565       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715115880 cert, and key in /tmp/serving-cert-3133346127/serving-signer.crt, /tmp/serving-cert-3133346127/serving-signer.key\nStaticPodsDegraded: I0507 21:04:41.243528       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 21:04:41.246137       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-17.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 21:04:41.246292       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 21:04:41.246942       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3133346127/tls.crt::/tmp/serving-cert-3133346127/tls.key"\nStaticPodsDegraded: F0507 21:04:41.365155       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-07 21:04:47.486991411 +0000 UTC m=+6308.610167779 to 2024-05-07 21:04:47.486991411 +0000 UTC m=+6308.610167779): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
#1787953313694617600 junit 11 days ago
I0508 00:00:57.852185       1 observer_polling.go:159] Starting file observer
W0508 00:00:57.868895       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-11-46.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0508 00:00:57.869040       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1788101374630694912 junit 10 days ago
May 08 08:32:50.223 - 14s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-8-125.us-west-1.compute.internal" not ready since 2024-05-08 08:32:42 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 08:33:04.740 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-8-125.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 08:32:56.691008       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 08:32:56.691590       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715157176 cert, and key in /tmp/serving-cert-13245061/serving-signer.crt, /tmp/serving-cert-13245061/serving-signer.key\nStaticPodsDegraded: I0508 08:32:56.966030       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 08:32:56.967514       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-8-125.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 08:32:56.967659       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0508 08:32:56.968294       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-13245061/tls.crt::/tmp/serving-cert-13245061/tls.key"\nStaticPodsDegraded: F0508 08:32:57.301092       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 08 08:38:02.544 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-92-37.us-west-1.compute.internal" not ready since 2024-05-08 08:36:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1788101374630694912 junit 10 days ago
May 08 08:43:52.090 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-103.us-west-1.compute.internal" not ready since 2024-05-08 08:43:31 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 08 08:44:04.551 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-43-103.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0508 08:43:56.555167       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0508 08:43:56.555348       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715157836 cert, and key in /tmp/serving-cert-3772949094/serving-signer.crt, /tmp/serving-cert-3772949094/serving-signer.key\nStaticPodsDegraded: I0508 08:43:57.005530       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0508 08:43:57.007883       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-43-103.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0508 08:43:57.008191       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0508 08:43:57.008869       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3772949094/tls.crt::/tmp/serving-cert-3772949094/tls.key"\nStaticPodsDegraded: F0508 08:43:57.197947       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 08 09:31:36.564 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-43-103.us-west-1.compute.internal" not ready since 2024-05-08 09:31:15 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787898954323595264 junit 11 days ago
E0507 18:06:55.654745       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-b79fsqll-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0507 18:12:37.400650       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-b79fsqll-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.98.102:6443: connect: connection refused
I0507 18:12:43.669859       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1787898954323595264 junit 11 days ago
I0507 19:18:34.698876       1 observer_polling.go:159] Starting file observer
W0507 19:18:34.718058       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-124.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0507 19:18:34.718183       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1787806662262788096 junit 11 days ago
I0507 13:53:03.264682       1 observer_polling.go:159] Starting file observer
W0507 13:53:03.276856       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-22-157.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0507 13:53:03.276956       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787805309838823424 junit 11 days ago
I0507 13:57:40.783333       1 observer_polling.go:159] Starting file observer
W0507 13:57:40.797038       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-19.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0507 13:57:40.797157       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787808362558132224 junit 11 days ago
May 07 14:09:59.155 - 9s    E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-20-86.us-east-2.compute.internal" not ready since 2024-05-07 14:09:48 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 14:10:08.913 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-20-86.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 14:10:02.906712       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 14:10:02.907070       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715091002 cert, and key in /tmp/serving-cert-2248275608/serving-signer.crt, /tmp/serving-cert-2248275608/serving-signer.key\nStaticPodsDegraded: I0507 14:10:03.231647       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 14:10:03.233037       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-20-86.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 14:10:03.233162       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 14:10:03.233751       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2248275608/tls.crt::/tmp/serving-cert-2248275608/tls.key"\nStaticPodsDegraded: F0507 14:10:03.536958       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 07 14:15:17.155 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-117-112.us-east-2.compute.internal" not ready since 2024-05-07 14:15:07 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
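These Degraded=True/False transitions are readable straight off the kube-apiserver ClusterOperator. A small dynamic-client sketch that prints the Degraded condition; the GVR is the real config.openshift.io/v1 clusteroperators resource, the rest is illustrative:

{code:go}
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
	co, err := dyn.Resource(gvr).Get(context.Background(), "kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
	for _, c := range conds {
		cond := c.(map[string]interface{})
		if cond["type"] == "Degraded" {
			// status/reason/message match the event lines quoted above.
			fmt.Println(cond["status"], cond["reason"], cond["message"])
		}
	}
}
{code}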

... 3 lines not shown

#1787802009043210240 junit 11 days ago
I0507 12:50:38.089917       1 observer_polling.go:159] Starting file observer
W0507 12:50:38.113977       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-10-164.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0507 12:50:38.114200       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787572936266223616 junit 12 days ago
I0506 22:38:17.742130       1 observer_polling.go:159] Starting file observer
W0506 22:38:17.752928       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-108-102.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 22:38:17.753095       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787731217416720384 junit 11 days ago
May 07 09:00:25.284 - 35s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-21-160.us-west-2.compute.internal" not ready since 2024-05-07 08:58:25 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 07 09:01:00.500 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False StaticPodsDegraded: pod/kube-apiserver-ip-10-0-21-160.us-west-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0507 09:00:52.536979       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0507 09:00:52.537232       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715072452 cert, and key in /tmp/serving-cert-1218408380/serving-signer.crt, /tmp/serving-cert-1218408380/serving-signer.key\nStaticPodsDegraded: I0507 09:00:52.964469       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0507 09:00:52.965945       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-160.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0507 09:00:52.966053       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0507 09:00:52.966632       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1218408380/tls.crt::/tmp/serving-cert-1218408380/tls.key"\nStaticPodsDegraded: F0507 09:00:53.200147       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready (exception: Degraded=False is the happy case)
May 07 09:05:51.253 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-100-33.us-west-2.compute.internal" not ready since 2024-05-07 09:03:51 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787568369109569536 junit 12 days ago
May 06 21:23:56.878 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-30-154.ec2.internal" not ready since 2024-05-06 21:23:46 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 21:24:09.260 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-30-154.ec2.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 21:24:01.049961       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 21:24:01.050169       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715030641 cert, and key in /tmp/serving-cert-4085818066/serving-signer.crt, /tmp/serving-cert-4085818066/serving-signer.key\nStaticPodsDegraded: I0506 21:24:01.390441       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 21:24:01.391961       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-30-154.ec2.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 21:24:01.392079       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 21:24:01.392708       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-4085818066/tls.crt::/tmp/serving-cert-4085818066/tls.key"\nStaticPodsDegraded: F0506 21:24:01.927161       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 21:29:04.015 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-33-20.ec2.internal" not ready since 2024-05-06 21:27:03 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)

... 3 lines not shown

#1787583741888040960 junit 12 days ago
I0506 22:28:49.208062       1 observer_polling.go:159] Starting file observer
W0506 22:28:49.217775       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-119-56.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 22:28:49.217901       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787546387181735936 junit 12 days ago
namespace/openshift-cloud-controller-manager node/ip-10-0-51-110.ec2.internal pod/aws-cloud-controller-manager-5f95b9d466-qffgr uid/ff2db85a-9211-42b8-9960-1f1e84cec690 container/cloud-controller-manager restarted 1 times:
cause/Error code/2 reason/ContainerExit s://api-int.ci-op-s69lkszz-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.92.138:6443: connect: connection refused
I0506 18:57:50.360719       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206

... 3 lines not shown

#1787537865891123200 junit 12 days ago
I0506 18:32:36.761827       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0506 18:38:12.770471       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-kwyyq3pd-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.33.85:6443: connect: connection refused
I0506 18:38:32.085123       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
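The reflector.go:351 "Caches populated for *v1.ConfigMap" lines mark an informer's initial LIST finally succeeding once the apiserver is reachable again; the dynamiccertificates controllers use such informers to reload CA bundles. A minimal informer over kube-system ConfigMaps, with illustrative handler logic:

{code:go}
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The reflector behind this informer is what logs "Caches populated for
	// *v1.ConfigMap" once its initial LIST succeeds.
	factory := informers.NewSharedInformerFactoryWithOptions(client, 10*time.Minute,
		informers.WithNamespace("kube-system"))
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			if cm.Name == "extension-apiserver-authentication" {
				fmt.Println("CA bundle ConfigMap changed; reload certs")
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, inf.HasSynced) // blocks until caches populated
	select {}
}
{code}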
#1787537865891123200 junit 12 days ago
I0506 20:39:11.507374       1 observer_polling.go:159] Starting file observer
W0506 20:39:11.521273       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-58-224.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 20:39:11.521406       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1787537317120970752 junit 12 days ago
I0506 18:19:48.990473       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0506 18:23:37.334157       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-cmpvn3ht-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.121.126:6443: connect: connection refused
I0506 18:23:53.152410       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
#1787537317120970752 junit 12 days ago
I0506 19:31:15.714103       1 observer_polling.go:159] Starting file observer
W0506 19:31:15.727239       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-113-68.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 19:31:15.727360       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1787531371783131136 junit 12 days ago
I0506 18:01:17.480014       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0506 18:01:22.688527       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-jrndgk1n-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.18.14:6443: connect: connection refused
#1787531371783131136 junit 12 days ago
I0506 19:09:11.961168       1 observer_polling.go:159] Starting file observer
W0506 19:09:11.976089       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-111-201.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0506 19:09:11.976382       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1787523333118496768 junit 12 days ago
I0506 18:21:02.237801       1 observer_polling.go:159] Starting file observer
W0506 18:21:02.248970       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-16-186.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 18:21:02.249075       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787523156278251520 junit 12 days ago
I0506 18:30:38.595825       1 observer_polling.go:159] Starting file observer
W0506 18:30:38.607691       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-17-31.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 18:30:38.607887       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787518082776829952 junit 12 days ago
1 tests failed during this blip (2024-05-06 18:01:31.820888325 +0000 UTC m=+3062.765339478 to 2024-05-06 18:01:31.820888325 +0000 UTC m=+3062.765339478): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 18:02:02.438 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-97-39.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 18:01:54.947803       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 18:01:54.948007       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715018514 cert, and key in /tmp/serving-cert-3886659153/serving-signer.crt, /tmp/serving-cert-3886659153/serving-signer.key\nStaticPodsDegraded: I0506 18:01:55.322543       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 18:01:55.324063       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-97-39.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 18:01:55.324187       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 18:01:55.324775       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-3886659153/tls.crt::/tmp/serving-cert-3886659153/tls.key"\nStaticPodsDegraded: F0506 18:01:55.578213       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded:
1 tests failed during this blip (2024-05-06 18:02:02.438999921 +0000 UTC m=+3093.383451084 to 2024-05-06 18:02:02.438999921 +0000 UTC m=+3093.383451084): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial] (exception: Degraded=False is the happy case)
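The F...cmd.go:170 fatal repeated through these runs is check-endpoints failing delegated-authentication setup: before serving, it must read the kube-system/extension-apiserver-authentication ConfigMap (which carries the request-header client CA) through the very apiserver that is restarting, so the container exits and is retried. The single failing read, sketched with an assumed in-cluster client and an illustrative function name:

{code:go}
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// loadRequestHeaderCA (illustrative) performs the one read the fatal above
// shows failing: delegated authentication cannot start until this ConfigMap
// is readable through the (still restarting) local apiserver.
func loadRequestHeaderCA(ctx context.Context, client kubernetes.Interface) string {
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx,
		"extension-apiserver-authentication", metav1.GetOptions{})
	if err != nil {
		// mirrors: "error initializing delegating authentication: unable to
		// load configmap based request-header-client-ca-file: ... refused"
		klog.Fatalf("error initializing delegating authentication: %v", err)
	}
	return cm.Data["requestheader-client-ca-file"]
}
{code}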
#1787518082776829952 junit 12 days ago
I0506 18:07:15.243289       1 observer_polling.go:159] Starting file observer
W0506 18:07:15.255940       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-144.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 18:07:15.256040       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519
#1787464435892228096 junit 12 days ago
May 06 14:27:22.393 - 31s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-111-252.us-west-1.compute.internal" not ready since 2024-05-06 14:25:22 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 14:27:54.345 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-111-252.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 14:27:46.088843       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 14:27:46.089337       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715005666 cert, and key in /tmp/serving-cert-1826190593/serving-signer.crt, /tmp/serving-cert-1826190593/serving-signer.key\nStaticPodsDegraded: I0506 14:27:46.772229       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 14:27:46.790180       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-111-252.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 14:27:46.790375       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 14:27:46.801247       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1826190593/tls.crt::/tmp/serving-cert-1826190593/tls.key"\nStaticPodsDegraded: F0506 14:27:47.075425       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 14:33:03.819 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-21-156.us-west-1.compute.internal" not ready since 2024-05-06 14:32:54 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787464435892228096 junit 12 days ago
May 06 15:16:58.068 - 28s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-21-156.us-west-1.compute.internal" not ready since 2024-05-06 15:16:52 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?]) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 15:17:26.723 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-21-156.us-west-1.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 15:17:18.473964       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 15:17:18.474192       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715008638 cert, and key in /tmp/serving-cert-2443708003/serving-signer.crt, /tmp/serving-cert-2443708003/serving-signer.key\nStaticPodsDegraded: I0506 15:17:18.702962       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 15:17:18.704709       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-21-156.us-west-1.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 15:17:18.704835       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 15:17:18.705467       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2443708003/tls.crt::/tmp/serving-cert-2443708003/tls.key"\nStaticPodsDegraded: F0506 15:17:18.908651       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 15:22:42.082 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-106-148.us-west-1.compute.internal" not ready since 2024-05-06 15:22:34 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787514342317494272 junit 12 days ago
May 06 17:39:55.028 - 34s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-56-235.us-east-2.compute.internal" not ready since 2024-05-06 17:37:55 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 17:40:29.448 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-56-235.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 17:40:21.666082       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 17:40:21.666272       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715017221 cert, and key in /tmp/serving-cert-310148559/serving-signer.crt, /tmp/serving-cert-310148559/serving-signer.key\nStaticPodsDegraded: I0506 17:40:22.033054       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 17:40:22.034515       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-56-235.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 17:40:22.034619       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 17:40:22.035250       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-310148559/tls.crt::/tmp/serving-cert-310148559/tls.key"\nStaticPodsDegraded: F0506 17:40:22.217830       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 17:45:21.027 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-10-203.us-east-2.compute.internal" not ready since 2024-05-06 17:43:21 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787514342317494272 junit 12 days ago
May 06 18:43:07.367 - 37s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-115-2.us-east-2.compute.internal" not ready since 2024-05-06 18:41:07 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 18:43:44.469 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-115-2.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 18:43:36.752928       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 18:43:36.753149       1 crypto.go:601] Generating new CA for check-endpoints-signer@1715021016 cert, and key in /tmp/serving-cert-2516561399/serving-signer.crt, /tmp/serving-cert-2516561399/serving-signer.key\nStaticPodsDegraded: I0506 18:43:37.004986       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 18:43:37.006440       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-115-2.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 18:43:37.006568       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 18:43:37.007456       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2516561399/tls.crt::/tmp/serving-cert-2516561399/tls.key"\nStaticPodsDegraded: F0506 18:43:37.261540       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
May 06 18:49:06.599 E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-56-235.us-east-2.compute.internal" not ready since 2024-05-06 18:48:43 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
#1787409713231564800 junit 12 days ago
May 06 11:58:11.029 - 12s   E clusteroperator/kube-apiserver condition/Degraded reason/NodeController_MasterNodesReady status/True NodeControllerDegraded: The master nodes not ready: node "ip-10-0-19-100.us-east-2.compute.internal" not ready since 2024-05-06 11:57:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) (exception: We are not worried about Degraded=True blips for update tests yet.)
May 06 11:58:23.607 W clusteroperator/kube-apiserver condition/Degraded reason/AsExpected status/False NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-19-100.us-east-2.compute.internal container "kube-apiserver-check-endpoints" is terminated: Error: W0506 11:58:14.114473       1 cmd.go:245] Using insecure, self-signed certificates\nStaticPodsDegraded: I0506 11:58:14.114881       1 crypto.go:601] Generating new CA for check-endpoints-signer@1714996694 cert, and key in /tmp/serving-cert-2456414698/serving-signer.crt, /tmp/serving-cert-2456414698/serving-signer.key\nStaticPodsDegraded: I0506 11:58:14.765828       1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0506 11:58:14.774854       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-19-100.us-east-2.compute.internal": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0506 11:58:14.774975       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519\nStaticPodsDegraded: I0506 11:58:14.788141       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2456414698/tls.crt::/tmp/serving-cert-2456414698/tls.key"\nStaticPodsDegraded: F0506 11:58:15.213346       1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get "https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: (exception: Degraded=False is the happy case)
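Every "dial tcp [::1]:6443: connect: connection refused" above is the gap between the old kube-apiserver container stopping and the new static pod binding the port. A throwaway probe that brackets that window, assuming plain TCP reachability is all you need (no TLS or authn):

{code:go}
package main

import (
	"fmt"
	"net"
	"time"
)

// Reports when localhost:6443 starts refusing and when it recovers,
// bracketing the static-pod rollover seen in the logs above.
func main() {
	for {
		conn, err := net.DialTimeout("tcp", "localhost:6443", 2*time.Second)
		if err != nil {
			fmt.Println(time.Now().Format(time.RFC3339), "refused/unreachable:", err)
		} else {
			conn.Close()
			fmt.Println(time.Now().Format(time.RFC3339), "accepting")
		}
		time.Sleep(time.Second)
	}
}
{code}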
#1787409713231564800 junit 12 days ago
I0506 09:47:16.292272       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
E0506 09:49:43.031178       1 leaderelection.go:332] error retrieving resource lock openshift-cloud-controller-manager/cloud-controller-manager: Get "https://api-int.ci-op-xng25khy-28ab6.aws-2.ci.openshift.org:6443/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-controller-manager/leases/cloud-controller-manager?timeout=53.5s": dial tcp 10.0.68.13:6443: connect: connection refused
I0506 09:50:04.168584       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
#1787251864727719936 junit 12 days ago
I0506 01:13:37.256164       1 observer_polling.go:159] Starting file observer
W0506 01:13:37.276183       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-51-172.us-west-2.compute.internal": dial tcp [::1]:6443: connect: connection refused
I0506 01:13:37.276364       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

#1787250734627033088 junit 12 days ago
I0506 01:13:33.863393       1 observer_polling.go:159] Starting file observer
W0506 01:13:33.875854       1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-115-134.ec2.internal": dial tcp [::1]:6443: connect: connection refused
I0506 01:13:33.875977       1 builder.go:299] check-endpoints version v4.0.0-alpha.0-1977-g8fae6b5-8fae6b519

... 3 lines not shown

Found in 70.43% of runs (238.24% of failures) across 115 total runs and 1 job (29.57% failed) in 1.269s