National Data Service / NDS-700

/var/glfs/global: Transport endpoint is not connected


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Component: Infrastructure
    • Sprint: NDS Sprint 18

    Description

      For reasons not yet understood, the NFS mount at /var/glfs/global can fail or become disconnected.

      This has happened on both Labs Workbench Beta and my small-scale mltest cluster.

      The most obvious way to see this error is by simply running df:

      core@workbench-node1 ~ $ df
      df: /var/glfs/global: Transport endpoint is not connected
      Filesystem     1K-blocks     Used Available Use% Mounted on
      devtmpfs        32973900        0  32973900   0% /dev
      tmpfs           32990036        0  32990036   0% /dev/shm
      tmpfs           32990036     2272  32987764   1% /run
      tmpfs           32990036        0  32990036   0% /sys/fs/cgroup
      /dev/vda9       78853524  5140600  70376896   7% /
      /dev/vda4        1007760   619300    336444  65% /usr
      tmpfs           32990036        0  32990036   0% /media
      tmpfs           32990036        0  32990036   0% /tmp
      /dev/vda1         130798    72466     58332  56% /boot
      /dev/vda6         110576       64    101340   1% /usr/share/oem
      /dev/vdb       104806400 58902200  45904200  57% /var/lib/docker
      tmpfs            6598008        0   6598008   0% /run/user/500
      core@workbench-node1 ~ $
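      A quicker probe than scanning the full df output is to stat the mount point directly: a disconnected mount typically fails the call with the same "Transport endpoint is not connected" error (or hangs, which is why the sketch below wraps the call in a timeout). This is only an illustrative check; the path and error message come from this issue, everything else is assumed:

      #!/bin/bash
      # Sketch: probe /var/glfs/global and exit non-zero if it looks disconnected.
      MOUNTPOINT=/var/glfs/global

      if ! timeout 10 stat "$MOUNTPOINT" >/dev/null 2>&1; then
          echo "ERROR: $MOUNTPOINT is unreachable (possibly 'Transport endpoint is not connected')" >&2
          exit 1
      fi
      echo "OK: $MOUNTPOINT responded"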

      The output from dmesg -T when this error occurs can be found below:

      core@mltest-node1 ~ $ dmesg -T
      [Tue Nov 29 09:26:31 2016] Linux version 4.7.3-coreos-r2 (jenkins@jenkins-os-executor-1.c.coreos-gce-testing.internal) (gcc version 4.9.3 (Gentoo Hardened 4.9.3 p1.5, pie-0.6.4) ) #1 SMP Tue Nov 1 01:38:43 UTC 2016
      [Tue Nov 29 09:26:31 2016] Command line: BOOT_IMAGE=/coreos/vmlinuz-b mount.usr=PARTUUID=e03dd35c-7c2d-4a47-b3fe-27f15780a57c rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 coreos.oem.id=openstack
      [Tue Nov 29 09:26:31 2016] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
      [Tue Nov 29 09:26:31 2016] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
      [Tue Nov 29 09:26:31 2016] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
      [Tue Nov 29 09:26:31 2016] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
      [Tue Nov 29 09:26:31 2016] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
      ...
      [Tue Nov 29 13:08:48 2016] overlayfs: upper fs needs to support d_type.
      [Tue Nov 29 13:08:48 2016] overlayfs: upper fs needs to support d_type.
      [Tue Nov 29 13:08:48 2016] overlayfs: upper fs needs to support d_type.
      [Tue Nov 29 13:09:07 2016] veth173c410: renamed from eth0
      [Tue Nov 29 13:09:07 2016] docker0: port 4(veth8300865) entered disabled state
      [Tue Nov 29 13:09:07 2016] docker0: port 4(veth8300865) entered blocking state
      [Tue Nov 29 13:09:07 2016] docker0: port 4(veth8300865) entered forwarding state
      [Tue Nov 29 13:09:07 2016] docker0: port 4(veth8300865) entered disabled state
      [Tue Nov 29 13:09:07 2016] device veth8300865 left promiscuous mode
      [Tue Nov 29 13:09:07 2016] docker0: port 4(veth8300865) entered disabled state
      [Tue Nov 29 13:09:18 2016] docker0: port 5(veth6d2c88f) entered disabled state
      [Tue Nov 29 13:09:18 2016] vethb1d0170: renamed from eth0
      [Tue Nov 29 13:09:18 2016] docker0: port 5(veth6d2c88f) entered blocking state
      [Tue Nov 29 13:09:18 2016] docker0: port 5(veth6d2c88f) entered forwarding state
      [Tue Nov 29 13:09:18 2016] docker0: port 5(veth6d2c88f) entered disabled state
      [Tue Nov 29 13:09:18 2016] device veth6d2c88f left promiscuous mode
      [Tue Nov 29 13:09:18 2016] docker0: port 5(veth6d2c88f) entered disabled state
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 161315632
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 161315624
      [Tue Nov 29 13:14:50 2016] Buffer I/O error on dev vdb, logical block 20164453, lost async page write
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 104953929
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 104953929
      [Tue Nov 29 13:14:50 2016] XFS (vdb): metadata I/O error: block 0x6417849 ("xlog_iodone") error 5 numblks 64
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_do_force_shutdown(0x2) called from line 1200 of file ../source/fs/xfs/xfs_log.c.  Return add
      [Tue Nov 29 13:14:50 2016] XFS (vdb): Log I/O Error Detected.  Shutting down filesystem
      [Tue Nov 29 13:14:50 2016] XFS (vdb): Please umount the filesystem and rectify the problem(s)
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 104953931
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 104953934
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 104953937
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 104953940
      [Tue Nov 29 13:14:50 2016] XFS (vdb): metadata I/O error: block 0x641784b ("xlog_iodone") error 5 numblks 64
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_do_force_shutdown(0x2) called from line 1200 of file ../source/fs/xfs/xfs_log.c.  Return add
      [Tue Nov 29 13:14:50 2016] XFS (vdb): metadata I/O error: block 0x641784e ("xlog_iodone") error 5 numblks 64
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_do_force_shutdown(0x2) called from line 1200 of file ../source/fs/xfs/xfs_log.c.  Return add
      [Tue Nov 29 13:14:50 2016] XFS (vdb): metadata I/O error: block 0x6417851 ("xlog_iodone") error 5 numblks 64
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_do_force_shutdown(0x2) called from line 1200 of file ../source/fs/xfs/xfs_log.c.  Return add
      [Tue Nov 29 13:14:50 2016] XFS (vdb): metadata I/O error: block 0x6417854 ("xlog_iodone") error 5 numblks 64
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_do_force_shutdown(0x2) called from line 1200 of file ../source/fs/xfs/xfs_log.c.  Return add
      [Tue Nov 29 13:14:50 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 161315640
      [Tue Nov 29 13:14:50 2016] blk_update_request: I/O error, dev vdb, sector 161315648
      [Tue Nov 29 13:14:50 2016] Buffer I/O error on dev vdb, logical block 20164454, lost async page write
      [Tue Nov 29 13:14:50 2016] Buffer I/O error on dev vdb, logical block 20164455, lost async page write
      [Tue Nov 29 13:14:50 2016] Buffer I/O error on dev vdb, logical block 20164456, lost async page write
      [Tue Nov 29 13:14:50 2016] Buffer I/O error on dev vdb, logical block 289747, lost async page write
      [Tue Nov 29 13:14:50 2016] Buffer I/O error on dev vdb, logical block 291269, lost async page write
      [Tue Nov 29 13:15:13 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:15:43 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:16:13 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:16:43 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:17:13 2016] XFS (vdb): xfs_log_force: error -5 returned.
      [Tue Nov 29 13:17:43 2016] XFS (vdb): xfs_log_force: error -5 returned.
      ...

      More investigation is needed into both the cause of this issue and the circumstances under which it occurs. Note that the I/O errors in the dmesg output are against /dev/vdb, which df shows mounted at /var/lib/docker rather than anywhere under /var/glfs, so it is not yet clear whether the disk failure and the disconnected mount are related.
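
      Until the root cause is understood, one possible manual recovery is to lazily unmount the dead endpoint and remount it. This is only a sketch; it assumes the mount is defined in /etc/fstab or an equivalent systemd mount unit on these nodes, which is not confirmed in this issue:

      core@workbench-node1 ~ $ sudo umount -l /var/glfs/global   # lazy unmount of the dead endpoint
      core@workbench-node1 ~ $ sudo mount /var/glfs/global       # remount from the existing fstab/unit definition
      core@workbench-node1 ~ $ df /var/glfs/global               # verify the mount responds again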

    People

      Assignee: David Raila (raila)
      Reporter: Sara Lambert (lambert8)
      Votes: 0
      Watchers: 2

    Time Tracking

      Original Estimate: 4h
      Remaining Estimate: 4h
      Time Spent: Not Specified
