Contents

  1. Failures
    1. Failed Commands
      1. workunit test rados/test_python.sh
      2. workunit test rbd/test_librbd_python.sh
      3. workunit test rbd/test_librbd.sh
      4. workunit test rados/test_alloc_hint.sh
      5. workunit test suites/pjd.sh
      6. workunit test suites/ffsb.sh
      7. workunit test fs/misc/multiple_rsync.sh
      8. workunit test fs/misc/trivial_sync.sh
      9. workunit test suites/fsstress.sh
      10. workunit test fs/misc/dirfrag.sh
      11. workunit test direct_io/misc.sh
      12. workunit test snaps/snaptest-snap-rename.sh
      13. workunit test rbd/map-snapshot-io.sh
      14. workunit test rbd/kernel.sh
      15. workunit test rbd/qemu-iotests.sh
      16. workunit test rbd/merge_diff.sh
      17. /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'
      18. ceph_test_async_driver
      19. sudo mount -o rw,hard,intr,nfsvers=3 msteuth05.ceph:/home/ubuntu/cephtest/mnt.0 /home/ubuntu/cephtest/mnt.1
      20. ERROR: test_client_pin (tasks.mds_client_limits.TestClientLimits)
      21. marginal:fs-misc/{clusters/two_clients.yaml fs/btrfs.yaml tasks/locktest.yaml}
      22. marginal:mds_restart/{clusters/one_mds.yaml tasks/restart-workunit-backtraces.yaml}
      23. ceph_test_filejournal fails
      24. test_alloc_hint fail
      25. ceph_test_async_driver fail
    2. Others
      1. cluster [WRN] ... slow requests
      2. cluster [WRN] OSD near full (90%)
      3. Administratively prohibited
      4. fs:standbyreplay
  2. Hangs
    1. Storage Space Depleted
      1. reached critical levels of available space on local monitor storage -- shutdown!
      2. problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device
      3. cluster [ERR] OSD full dropping all updates 100% full
      4. [Errno 28] No space left on device
    2. Failed Assertions
      1. FAILED assert(_head.empty())
      2. FAILED assert(r == 0)
      3. FAILED assert(sub_info.rbytes == fnode.rstat.rbytes)
      4. FAILED assert(it != import_state.end())
    3. Other
      1. ConnectionLostError: SSH connection to msteuth14 was lost
      2. No progress


Failures

Failed Commands

workunit test rados/test_python.sh

workunit test rbd/test_librbd_python.sh

  • Status
  • Affected Tests
    • smoke:basic/{clusters/fixed-3-cephfs.yaml fs/btrfs.yaml tasks/rbd_python_api_tests.yaml}
    • rbd:librbd/{cache/none.yaml cachepool/small.yaml clusters/fixed-3.yaml copy-on-read/off.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/python_api_tests.yaml}
    • rbd:librbd/{cache/writeback.yaml cachepool/none.yaml clusters/fixed-3.yaml copy-on-read/off.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/python_api_tests.yaml}
    • rbd:librbd/{cache/writethrough.yaml cachepool/none.yaml clusters/fixed-3.yaml copy-on-read/on.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/python_api_tests.yaml}
    • rbd:librbd/{cache/none.yaml cachepool/none.yaml clusters/fixed-3.yaml copy-on-read/off.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/python_api_tests_with_object_map.yaml}
    • rbd:librbd/{cache/none.yaml cachepool/none.yaml clusters/fixed-3.yaml copy-on-read/on.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/python_api_tests.yaml}
    • rbd:librbd/{cache/writethrough.yaml cachepool/small.yaml clusters/fixed-3.yaml copy-on-read/off.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/python_api_tests_with_object_map.yaml}
  • Sample Output

workunit test rbd/test_librbd.sh

  • Status
  • Affected Tests
    • rbd:thrash/{base/install.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml thrashers/cache.yaml workloads/rbd_api_tests_no_locking.yaml}
  • Sample Output

workunit test rados/test_alloc_hint.sh

  • Status
  • Affected Tests
    • rados:objectstore/alloc-hint.yaml
  • Sample Output

workunit test suites/pjd.sh

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/3-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/suites_pjd.yaml}
    • multimds:basic/{ceph/base.yaml clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/suites_pjd.yaml}
    • multimds:basic/{ceph/base.yaml clusters/3-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/suites_pjd.yaml}
    • multimds:basic/{ceph/base.yaml clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/suites_pjd.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_pjd.yaml thrash/exports.yaml}
  • Sample Output

workunit test suites/ffsb.sh

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml mount/kclient.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/suites_ffsb.yaml}
  • Sample Output

workunit test fs/misc/multiple_rsync.sh

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml mount/kclient.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/misc.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_misc.yaml thrash/normal.yaml}
  • Sample Output

workunit test fs/misc/trivial_sync.sh

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/3-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/misc.yaml}
  • Sample Output

workunit test suites/fsstress.sh

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-3-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_fsstress.yaml thrash/exports.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_fsstress.yaml thrash/exports.yaml}
  • Sample Output

workunit test fs/misc/dirfrag.sh

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_misc.yaml thrash/exports.yaml}
  • Sample Output

workunit test direct_io/misc.sh

  • Status
  • Affected Tests
    • kcephfs:cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/btrfs.yaml inline/yes.yaml tasks/kclient_workunit_direct_io.yaml}
  • Sample Output

2015-07-20T23:10:08.053 INFO:tasks.workunit:Running workunit direct_io/misc.sh...
2015-07-20T23:10:08.054 INFO:teuthology.orchestra.run.msteuth07:Running (workunit test direct_io/misc.sh): 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=workunit-fixes-for-aarch64 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/direct_io/misc.sh'
2015-07-20T23:10:08.092 INFO:tasks.workunit.client.0.msteuth07.stderr:+ echo test read from hole
2015-07-20T23:10:08.094 INFO:tasks.workunit.client.0.msteuth07.stderr:+ dd if=/dev/zero of=dd3 bs=1 seek=1048576 count=0
2015-07-20T23:10:08.095 INFO:tasks.workunit.client.0.msteuth07.stdout:test read from hole
2015-07-20T23:10:08.099 INFO:tasks.workunit.client.0.msteuth07.stderr:0+0 records in
2015-07-20T23:10:08.100 INFO:tasks.workunit.client.0.msteuth07.stderr:0+0 records out
2015-07-20T23:10:08.101 INFO:tasks.workunit.client.0.msteuth07.stderr:0 bytes (0 B) copied, 0.000225443 s, 0.0 kB/s
2015-07-20T23:10:08.103 INFO:tasks.workunit.client.0.msteuth07.stderr:+ dd if=dd3 of=/tmp/ddout1 skip=8 bs=512 count=2 iflag=direct
2015-07-20T23:10:08.149 INFO:tasks.workunit.client.0.msteuth07.stderr:0+0 records in
2015-07-20T23:10:08.150 INFO:tasks.workunit.client.0.msteuth07.stderr:0+0 records out
2015-07-20T23:10:08.152 INFO:tasks.workunit.client.0.msteuth07.stderr:0 bytes (0 B) copied, 0.0417777 s, 0.0 kB/s
2015-07-20T23:10:08.153 INFO:tasks.workunit:Stopping ['direct_io'] on client.0...
2015-07-20T23:10:08.155 INFO:teuthology.orchestra.run.msteuth07:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0'
2015-07-20T23:10:08.207 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/workunit.py", line 361, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 137, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed (workunit test direct_io/misc.sh) on msteuth07 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=workunit-fixes-for-aarch64 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/direct_io/misc.sh'
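
A minimal reproduction of the failing step, as a sketch assuming a CephFS mount at a hypothetical /mnt/cephfs (the actual pass/fail check lives in misc.sh; this only re-runs the two dd commands from the log above):

cd /mnt/cephfs
# create a 1 MB sparse file, then read two 512-byte blocks from the hole with O_DIRECT
dd if=/dev/zero of=dd3 bs=1 seek=1048576 count=0
dd if=dd3 of=/tmp/ddout1 skip=8 bs=512 count=2 iflag=direct
stat -c %s /tmp/ddout1   # the failing run copied 0 bytes instead of the expected 1024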

workunit test snaps/snaptest-snap-rename.sh

workunit test rbd/map-snapshot-io.sh

  • Status
  • Affected Tests
    • krbd:rbd-nomount/{clusters/fixed-3.yaml conf.yaml fs/btrfs.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_map_snapshot_io.yaml}
    • krbd:rbd-nomount/{clusters/fixed-3.yaml conf.yaml fs/btrfs.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_map_snapshot_io.yaml}
  • Sample Output

workunit test rbd/kernel.sh

  • Status
  • Affected Tests
    • krbd:rbd-nomount/{clusters/fixed-3.yaml conf.yaml fs/btrfs.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_kernel.yaml}
  • Sample Output

workunit test rbd/qemu-iotests.sh

2015-07-29T16:37:01.264 INFO:tasks.workunit.client.0.msteuth05.stderr:+ [ qemu/tests/qemu-iotests = qemu/tests/qemu-iotests ]
2015-07-29T16:37:01.265 INFO:tasks.workunit.client.0.msteuth05.stderr:+ git clone git://apt-mirror.front.sepia.ceph.com/qemu.git
2015-07-29T16:37:01.267 INFO:tasks.workunit.client.0.msteuth05.stderr:Cloning into 'qemu'...
2015-07-29T16:39:08.530 INFO:tasks.workunit.client.0.msteuth05.stderr:fatal: unable to connect to apt-mirror.front.sepia.ceph.com:
2015-07-29T16:39:08.531 INFO:tasks.workunit.client.0.msteuth05.stderr:apt-mirror.front.sepia.ceph.com[0: 10.214.134.138]: errno=Connection timed out
2015-07-29T16:39:08.532 INFO:tasks.workunit.client.0.msteuth05.stderr:
2015-07-29T16:39:08.536 INFO:tasks.workunit:Stopping ['rbd/qemu-iotests.sh'] on client.0...
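
A quick connectivity check, sketched: the clone URL is taken verbatim from the log, and the git protocol uses TCP port 9418, so either of the following run from the test node should show whether the Sepia mirror is reachable at all:

git ls-remote git://apt-mirror.front.sepia.ceph.com/qemu.git HEAD
nc -z -v -w 5 apt-mirror.front.sepia.ceph.com 9418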

workunit test rbd/merge_diff.sh

CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=workunit-fixes-for-aarch64 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/merge_diff.sh'
2015-07-29T16:37:32.901 INFO:tasks.workunit.client.0.msteuth04.stderr:+ pool=rbd
2015-07-29T16:37:32.902 INFO:tasks.workunit.client.0.msteuth04.stderr:+ gen=rbd/gen
2015-07-29T16:37:32.903 INFO:tasks.workunit.client.0.msteuth04.stderr:+ out=rbd/out
2015-07-29T16:37:32.904 INFO:tasks.workunit.client.0.msteuth04.stderr:+ testno=1
2015-07-29T16:37:32.905 INFO:tasks.workunit.client.0.msteuth04.stderr:+ mkdir -p merge_diff_test
2015-07-29T16:37:32.906 INFO:tasks.workunit.client.0.msteuth04.stderr:+ pushd merge_diff_test
2015-07-29T16:37:32.907 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rebuild 22 4194304 1 2
2015-07-29T16:37:32.908 INFO:tasks.workunit.client.0.msteuth04.stderr:+ clear_all
2015-07-29T16:37:32.909 INFO:tasks.workunit.client.0.msteuth04.stderr:+ fusermount -u mnt
2015-07-29T16:37:32.910 INFO:tasks.workunit.client.0.msteuth04.stdout:~/cephtest/mnt.0/client.0/tmp/merge_diff_test ~/cephtest/mnt.0/client.0/tmp
2015-07-29T16:37:32.912 INFO:tasks.workunit.client.0.msteuth04.stderr:fusermount: entry for /home/ubuntu/cephtest/mnt.0/client.0/tmp/merge_diff_test/mnt not found in /etc/mtab
2015-07-29T16:37:32.913 INFO:tasks.workunit.client.0.msteuth04.stderr:+ true
2015-07-29T16:37:32.914 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd snap purge --no-progress rbd/gen
2015-07-29T16:37:32.971 INFO:tasks.workunit.client.0.msteuth04.stderr:2015-07-29 20:37:32.971585 7f8e85f000 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
2015-07-29T16:37:32.973 INFO:tasks.workunit.client.0.msteuth04.stderr:rbd: error opening image gen: (2) No such file or directory
2015-07-29T16:37:32.976 INFO:tasks.workunit.client.0.msteuth04.stderr:+ true
2015-07-29T16:37:32.977 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd rm --no-progress rbd/gen
2015-07-29T16:37:33.041 INFO:tasks.workunit.client.0.msteuth04.stderr:2015-07-29 20:37:33.041394 7faa67b000 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
2015-07-29T16:37:33.047 INFO:tasks.workunit.client.0.msteuth04.stderr:rbd: delete error: (2) No such file or directory
2015-07-29T16:37:33.048 INFO:tasks.workunit.client.0.msteuth04.stderr:2015-07-29 20:37:33.046935 7faa67b000 -1 librbd: error removing img from new-style directory: (2) No such file or directory
2015-07-29T16:37:33.050 INFO:tasks.workunit.client.0.msteuth04.stderr:+ true
2015-07-29T16:37:33.051 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd snap purge --no-progress rbd/out
2015-07-29T16:37:33.118 INFO:tasks.workunit.client.0.msteuth04.stderr:2015-07-29 20:37:33.116893 7fb0bdc000 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
2015-07-29T16:37:33.119 INFO:tasks.workunit.client.0.msteuth04.stderr:rbd: error opening image out: (2) No such file or directory
2015-07-29T16:37:33.121 INFO:tasks.workunit.client.0.msteuth04.stderr:+ true
2015-07-29T16:37:33.122 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd rm --no-progress rbd/out
2015-07-29T16:37:33.188 INFO:tasks.workunit.client.0.msteuth04.stderr:2015-07-29 20:37:33.188586 7f86f84000 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
2015-07-29T16:37:33.194 INFO:tasks.workunit.client.0.msteuth04.stderr:rbd: delete error: (2) No such file or directory
2015-07-29T16:37:33.195 INFO:tasks.workunit.client.0.msteuth04.stderr:2015-07-29 20:37:33.193986 7f86f84000 -1 librbd: error removing img from new-style directory: (2) No such file or directory
2015-07-29T16:37:33.199 INFO:tasks.workunit.client.0.msteuth04.stderr:+ true
2015-07-29T16:37:33.199 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rm -rf diffs
2015-07-29T16:37:33.202 INFO:tasks.workunit.client.0.msteuth04.stderr:+ echo Starting test 1
2015-07-29T16:37:33.203 INFO:tasks.workunit.client.0.msteuth04.stderr:+ (( testno++ ))
2015-07-29T16:37:33.204 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd create rbd/gen --size 100 --order 22 --stripe_unit 4194304 --stripe_count 1 --image-format 2
2015-07-29T16:37:33.205 INFO:tasks.workunit.client.0.msteuth04.stdout:Starting test 1
2015-07-29T16:37:33.362 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd create rbd/out --size 1 --order 19
2015-07-29T16:37:33.475 INFO:tasks.workunit.client.0.msteuth04.stderr:+ mkdir -p mnt diffs
2015-07-29T16:37:33.477 INFO:tasks.workunit.client.0.msteuth04.stderr:+ LD_PRELOAD=liblttng-ust-fork.so.0
2015-07-29T16:37:33.478 INFO:tasks.workunit.client.0.msteuth04.stderr:+ rbd-fuse -p rbd mnt
2015-07-29T16:37:33.655 INFO:tasks.workunit.client.0.msteuth04.stderr:/home/ubuntu/cephtest/workunit.client.0/rbd/merge_diff.sh: line 29:  4267 Segmentation fault      (core dumped) LD_PRELOAD=liblttng-ust-fork.so.0 rbd-fuse -p $pool mnt
2015-07-29T16:37:33.659 INFO:tasks.workunit:Stopping ['rbd/merge_diff.sh'] on client.0...
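
A possible next step, sketched under the assumption that core dumps are enabled on the client node (the core file location depends on the kernel's core_pattern and is illustrative here):

ulimit -c unlimited
LD_PRELOAD=liblttng-ust-fork.so.0 rbd-fuse -p rbd mnt   # reproduce the segfault from line 29 of merge_diff.sh
gdb $(which rbd-fuse) core                              # then 'bt' for a backtrace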

/home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'

  • Status
    • Missing command was added in a later commit; retest with the latest Ceph (v9.0.2).
  • Affected Tests
    • rbd:singleton/{all/formatted-output.yaml}
  • Sample Output

rbd: error parsing command 'disk-usage'; -h or --help for usage
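
A small sanity check before re-running, sketched: confirm the installed rbd binary actually knows the disk-usage subcommand (older builds print the same "error parsing command" message seen above):

rbd --version
rbd disk-usage 2>&1 | head -1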

ceph_test_async_driver

  • Status
  • Affected Tests
    • rados:singleton-nomsgr/{all/msgr.yaml}
  • Sample Output

sudo mount -o rw,hard,intr,nfsvers=3 msteuth05.ceph:/home/ubuntu/cephtest/mnt.0 /home/ubuntu/cephtest/mnt.1

  • Status
  • Affected Tests
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs_workunit_suites_dbench.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs_workunit_suites_blogbench.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs_workunit_misc.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs-workunit-kernel-untar-build.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs_workunit_suites_fsstress.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs_workunit_suites_iozone.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs_workunit_suites_dbench.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs_workunit_suites_blogbench.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs_workunit_suites_ffsb.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs_workunit_suites_iozone.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs_workunit_suites_fsstress.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v3.yaml tasks/nfs_workunit_suites_ffsb.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs-workunit-kernel-untar-build.yaml}
    • knfs:basic/{ceph/base.yaml clusters/extra-client.yaml debug/mds.yaml fs/btrfs.yaml mount/v4.yaml tasks/nfs_workunit_misc.yaml}
  • Sample Output

2015-07-30T11:50:34.382 INFO:teuthology.orchestra.run.msteuth07:Running: 'sudo mount -o rw,hard,intr,nfsvers=3 msteuth05.ceph:/home/ubuntu/cephtest/mnt.0 /home/ubuntu/cephtest/mnt.1'
2015-07-30T11:50:34.437 INFO:teuthology.orchestra.run.msteuth07.stderr:mount.nfs: requested NFS version or transport protocol is not supported
2015-07-30T11:50:34.441 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 55, in run_tasks
    manager.__enter__()
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/src/teuthology_master/teuthology/task/nfs.py", line 106, in task
    mnt
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 137, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed on msteuth07 with status 32: 'sudo mount -o rw,hard,intr,nfsvers=3 msteuth05.ceph:/home/ubuntu/cephtest/mnt.0 /home/ubuntu/cephtest/mnt.1'
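
A diagnostic sketch using the hostnames from the log: check which NFS versions the re-exporting node actually advertises before retrying the nfsvers=3 mount.

rpcinfo -p msteuth05.ceph | grep -w nfs
# and on msteuth05 itself:
cat /proc/fs/nfsd/versions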

ERROR: test_client_pin (tasks.mds_client_limits.TestClientLimits)

  • Status
  • Affected Tests
    • fs:recovery/{clusters/2-remote-clients.yaml debug/mds_client.yaml mounts/ceph-fuse.yaml tasks/client-limits.yaml}
  • Sample Output

2015-07-31T06:49:29.827 INFO:tasks.cephfs.cephfs_test_case:======================================================================
2015-07-31T06:49:29.828 INFO:tasks.cephfs.cephfs_test_case:ERROR: test_client_pin (tasks.mds_client_limits.TestClientLimits)
2015-07-31T06:49:29.829 INFO:tasks.cephfs.cephfs_test_case:----------------------------------------------------------------------
2015-07-31T06:49:29.830 INFO:tasks.cephfs.cephfs_test_case:Traceback (most recent call last):
2015-07-31T06:49:29.831 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/mds_client_limits.py", line 109, in test_client_pin
2015-07-31T06:49:29.831 INFO:tasks.cephfs.cephfs_test_case:    self._test_client_pin(True)
2015-07-31T06:49:29.832 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/mds_client_limits.py", line 80, in _test_client_pin
2015-07-31T06:49:29.833 INFO:tasks.cephfs.cephfs_test_case:    reject_fn=lambda x: x > open_files + 2)
2015-07-31T06:49:29.834 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/cephfs/cephfs_test_case.py", line 109, in wait_until_equal
2015-07-31T06:49:29.835 INFO:tasks.cephfs.cephfs_test_case:    val = get_fn()
2015-07-31T06:49:29.835 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/mds_client_limits.py", line 77, in <lambda>
2015-07-31T06:49:29.836 INFO:tasks.cephfs.cephfs_test_case:    self.wait_until_equal(lambda: self.get_session(mount_a_client_id)['num_caps'],
2015-07-31T06:49:29.837 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/cephfs/cephfs_test_case.py", line 98, in get_session
2015-07-31T06:49:29.838 INFO:tasks.cephfs.cephfs_test_case:    session_ls = self.fs.mds_asok(['session', 'ls'])
2015-07-31T06:49:29.839 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/cephfs/filesystem.py", line 315, in mds_asok
2015-07-31T06:49:29.840 INFO:tasks.cephfs.cephfs_test_case:    return self.json_asok(command, 'mds', mds_id)
2015-07-31T06:49:29.840 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/cephfs/filesystem.py", line 303, in json_asok
2015-07-31T06:49:29.841 INFO:tasks.cephfs.cephfs_test_case:    proc = self.mon_manager.admin_socket(service_type, service_id, command)
2015-07-31T06:49:29.842 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/ceph_manager.py", line 920, in admin_socket
2015-07-31T06:49:29.843 INFO:tasks.cephfs.cephfs_test_case:    check_status=check_status
2015-07-31T06:49:29.844 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 137, in run
2015-07-31T06:49:29.845 INFO:tasks.cephfs.cephfs_test_case:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2015-07-31T06:49:29.846 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
2015-07-31T06:49:29.846 INFO:tasks.cephfs.cephfs_test_case:    r.wait()
2015-07-31T06:49:29.847 INFO:tasks.cephfs.cephfs_test_case:  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
2015-07-31T06:49:29.848 INFO:tasks.cephfs.cephfs_test_case:    label=self.label)
2015-07-31T06:49:29.849 INFO:tasks.cephfs.cephfs_test_case:CommandFailedError: Command failed on teuth6 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok session ls'
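
The command that returned status 22 (EINVAL) is shown in the traceback; a sketch of re-running it by hand on the MDS node, plus listing the admin socket commands that daemon actually supports:

sudo ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok session ls
sudo ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok help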

marginal:fs-misc/{clusters/two_clients.yaml fs/btrfs.yaml tasks/locktest.yaml}

marginal:mds_restart/{clusters/one_mds.yaml tasks/restart-workunit-backtraces.yaml}

ceph_test_filejournal fails

2015-07-24T14:53:54.541 INFO:teuthology.orchestra.run.teuth9.stdout:[----------] Global test environment tear-down
2015-07-24T14:53:54.542 INFO:teuthology.orchestra.run.teuth9.stdout:[==========] 12 tests from 1 test case ran. (17276 ms total)
2015-07-24T14:53:54.542 INFO:teuthology.orchestra.run.teuth9.stdout:[  PASSED  ] 11 tests.
2015-07-24T14:53:54.543 INFO:teuthology.orchestra.run.teuth9.stdout:[  FAILED  ] 1 test, listed below:
2015-07-24T14:53:54.544 INFO:teuthology.orchestra.run.teuth9.stdout:[  FAILED  ] TestFileJournal.ReplayCorrupt
2015-07-24T14:53:54.545 INFO:teuthology.orchestra.run.teuth9.stdout:
2015-07-24T14:53:54.546 INFO:teuthology.orchestra.run.teuth9.stdout: 1 FAILED TEST
CommandFailedError: Command failed on teuth9 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_filejournal'
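
To iterate on just the failing case, a sketch using the standard googletest filter flag with the same environment as the teuthology run:

sudo TESTDIR=/home/ubuntu/cephtest ceph_test_filejournal --gtest_filter=TestFileJournal.ReplayCorrupt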

test_alloc_hint fail

2015-07-24T08:14:12.218 INFO:tasks.workunit.client.0.teuth9.stderr:+ fns=(${OSD_DATA[i]}/current/${PGID}*_head/${OBJ}_*)
2015-07-24T08:14:12.222 INFO:tasks.workunit.client.0.teuth9.stderr:+ local fns
2015-07-24T08:14:12.223 INFO:tasks.workunit.client.0.teuth9.stderr:+ local count=1
2015-07-24T08:14:12.224 INFO:tasks.workunit.client.0.teuth9.stderr:+ '[' 1 -ne 1 ']'
2015-07-24T08:14:12.225 INFO:tasks.workunit.client.0.teuth9.stderr:+ local extsize
2015-07-24T08:14:12.225 INFO:tasks.workunit.client.0.teuth9.stderr:++ sudo xfs_io -c extsize /var/lib/ceph/osd/ceph-0/current/1.6_head/foo__head_7FC1F406__1
2015-07-24T08:14:12.304 INFO:tasks.workunit.client.0.teuth9.stderr:foreign file active, extsize command is for XFS filesystems only
2015-07-24T08:14:12.310 INFO:tasks.workunit.client.0.teuth9.stderr:+ extsize=
2015-07-24T08:14:12.311 INFO:tasks.workunit.client.0.teuth9.stderr:+ local 'extsize_regex=^\[(.*)\] /var/lib/ceph/osd/ceph-0/current/1.6_head/foo__head_7FC1F406__1$'
2015-07-24T08:14:12.311 INFO:tasks.workunit.client.0.teuth9.stderr:+ [[ ! '' =~ ^\[(.*)\] /var/lib/ceph/osd/ceph-0/current/1.6_head/foo__head_7FC1F406__1$ ]]
2015-07-24T08:14:12.315 INFO:tasks.workunit.client.0.teuth9.stderr:+ echo 'extsize doesn'\''t match extsize_regex: '
2015-07-24T08:14:12.316 INFO:tasks.workunit.client.0.teuth9.stderr:extsize doesn't match extsize_regex:
2015-07-24T08:14:12.317 INFO:tasks.workunit.client.0.teuth9.stderr:+ return 2
2015-07-24T08:14:12.318 INFO:tasks.workunit:Stopping ['rados/test_alloc_hint.sh'] on client.0...
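
The extsize check in test_alloc_hint.sh is XFS-only, and the "foreign file active" message above suggests the OSD data directory is not on XFS in this run. A sketch to confirm the underlying filesystem:

sudo stat -f -c %T /var/lib/ceph/osd/ceph-0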

ceph_test_async_driver fail

2015-07-09T06:46:32.044 INFO:teuthology.orchestra.run.teuth6.stdout:[----------] Global test environment tear-down
2015-07-09T06:46:32.044 INFO:teuthology.orchestra.run.teuth6.stdout:[==========] 5 tests from 2 test cases ran. (5122 ms total)
2015-07-09T06:46:32.045 INFO:teuthology.orchestra.run.teuth6.stdout:[  PASSED  ] 1 test.
2015-07-09T06:46:32.046 INFO:teuthology.orchestra.run.teuth6.stdout:[  FAILED  ] 4 tests, listed below:
2015-07-09T06:46:32.047 INFO:teuthology.orchestra.run.teuth6.stdout:[  FAILED  ] EventCenterTest.FileEventExpansion
2015-07-09T06:46:32.048 INFO:teuthology.orchestra.run.teuth6.stdout:[  FAILED  ] AsyncMessenger/EventDriverTest.PipeTest/1, where GetParam() = "select"
2015-07-09T06:46:32.049 INFO:teuthology.orchestra.run.teuth6.stdout:[  FAILED  ] AsyncMessenger/EventDriverTest.NetworkSocketTest/0, where GetParam() = "epoll"
2015-07-09T06:46:32.050 INFO:teuthology.orchestra.run.teuth6.stdout:[  FAILED  ] AsyncMessenger/EventDriverTest.NetworkSocketTest/1, where GetParam() = "select"
2015-07-09T06:46:32.051 INFO:teuthology.orchestra.run.teuth6.stdout:
2015-07-09T06:46:32.051 INFO:teuthology.orchestra.run.teuth6.stdout: 4 FAILED TESTS
2015-07-09T06:46:32.053 INFO:teuthology.orchestra.run.teuth6.stderr:SetUp start set up select
2015-07-09T06:46:32.053 INFO:teuthology.orchestra.run.teuth6.stderr:2015-07-09 06:46:32.028267 3ff925c95f0 -1 EpollDriver.init unable to do epoll_create: (24) Too many open files
2015-07-09T06:46:32.107 ERROR:teuthology.run_tasks:Saw exception from tasks.
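
The epoll_create failure is EMFILE ("Too many open files"), so one thing to try, sketched: raise the per-process fd limit and re-run only the failing cases via the googletest filter.

ulimit -n 4096
ceph_test_async_driver --gtest_filter='EventCenterTest.*:AsyncMessenger/EventDriverTest.*'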

Others

cluster [WRN] ... slow requests

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/ceph-fuse.yaml tasks/workunit_suites_blogbench.yaml thrash/exports.yaml}
    • multimds:libcephfs/{ceph/base.yaml clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_java.yaml}
    • smoke:basic/{clusters/fixed-3-cephfs.yaml fs/btrfs.yaml tasks/rados_api_tests.yaml}
  • Sample Output

"2015-07-07 19:22:08.908044 osd.0 10.20.100.2:6800/15640 101 : cluster
    [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.864058
    secs" in cluster log

cluster [WRN] OSD near full (90%)

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/3-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/misc.yaml}
    • multimds:basic/{ceph/base.yaml clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml mount/cfuse.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/misc.yaml}
  • Sample Output

"2015-07-19 00:37:19.089980 osd.5 10.100.0.54:6808/15204 1 : cluster [WRN]
    OSD near full (90%)" in cluster log
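
A quick check, sketched, to tell whether the warning reflects genuinely small OSD partitions on the test nodes or just data skew:

ceph df
ceph health detail
df -h /var/lib/ceph/osd/ceph-*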

Administratively prohibited

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_truncate_delay.yaml thrash/exports.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/ceph-fuse.yaml tasks/workunit_suites_truncate_delay.yaml thrash/exports.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/ceph-fuse.yaml tasks/workunit_suites_truncate_delay.yaml thrash/normal.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_dbench.yaml thrash/normal.yaml}
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_truncate_delay.yaml thrash/normal.yaml}
  • Sample Output

fs:standbyreplay

  • Status
  • Affected Tests
    • fs:standbyreplay/{clusters/standby-replay.yaml mount/fuse.yaml tasks/migration.yaml}
  • Sample Output


Hangs

Storage Space Depleted

reached critical levels of available space on local monitor storage -- shutdown!

  • Status
  • Affected Tests
    • fs:basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_misc.yaml}
    • fs:basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_misc.yaml}
    • krbd:rbd-nomount/{clusters/fixed-3.yaml conf.yaml fs/btrfs.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_simple_big.yaml}
    • krbd:rbd-nomount/{clusters/fixed-3.yaml conf.yaml fs/btrfs.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_simple_big.yaml}
  • Sample Output

2015-07-20T23:05:40.301 INFO:tasks.ceph.mon.c.msteuth10.stderr:2015-07-21 03:05:40.301637 7fb19a60a0 -1 mon.c@2(peon).data_health(8) reached critical levels of available space on local monitor storage -- shutdown!
2015-07-20T23:05:40.314 INFO:tasks.ceph.mon.c.msteuth10.stderr:2015-07-21 03:05:40.314479 7faf7a60a0 -1 mon.c@2(peon) e1 *** Got Signal Interrupt ***
2015-07-20T23:05:48.247 INFO:tasks.ceph.mon.b.msteuth10.stderr:2015-07-21 03:05:48.245499 7fb36120a0 -1 mon.b@0(leader).data_health(8) reached critical levels of available space on local monitor storage -- shutdown!
2015-07-20T23:05:48.249 INFO:tasks.ceph.mon.b.msteuth10.stderr:2015-07-21 03:05:48.246157 7fb14120a0 -1 mon.b@0(leader) e1 *** Got Signal Interrupt ***
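
The shutdown is driven by the monitor's free-space check (the "mon data avail crit" threshold, 5% free by default); a sketch of verifying how much space the mon stores have left on the affected node:

df -h /var/lib/ceph/mon
sudo du -sh /var/lib/ceph/mon/*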

problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/3-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml mount/kclient.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/misc.yaml}
  • Sample Output

cluster [ERR] OSD full dropping all updates 100% full

  • Status
  • Affected Tests
    • fs:recovery/{clusters/2-remote-clients.yaml debug/mds_client.yaml mounts/ceph-fuse.yaml tasks/mds-full.yaml}
  • Sample Output

"2015-04-16 18:36:37.720876 osd.0 10.20.200.1:6800/2236 1 : cluster [ERR]
    OSD full dropping all updates 100% full" in cluster log

[Errno 28] No space left on device

  • Status
  • Affected Tests
    • kcephfs:cephfs/{clusters/fixed-3-cephfs.yaml conf.yaml fs/btrfs.yaml inline/yes.yaml tasks/kclient_workunit_misc.yaml}
  • Sample Output

Failed Assertions

FAILED assert(_head.empty())

  • Status
  • Affected Tests
    • multimds:basic/{ceph/base.yaml clusters/3-mds.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml mount/kclient.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/kernel_untar_build.yaml}
  • Sample Output

FAILED assert(r == 0)

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/ceph-fuse.yaml tasks/workunit_suites_pjd.yaml thrash/exports.yaml}
    • marginal:multimds/{clusters/3-node-3-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_suites_pjd.yaml thrash/exports.yaml}
  • Sample Output

FAILED assert(sub_info.rbytes == fnode.rstat.rbytes)

FAILED assert(it != import_state.end())

Other

ConnectionLostError: SSH connection to msteuth14 was lost

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-3-mds.yaml fs/btrfs.yaml mounts/kclient.yaml tasks/workunit_misc.yaml thrash/exports.yaml}
  • Sample Output

2015-07-29T19:53:21.686 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/ceph-qa-suite_no_valgrind/tasks/workunit.py", line 372, in _run_tests
    'rm', '-rf', '--', '{tdir}/workunits.list.{role}'.format(tdir=testdir, role=role), srcdir,
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 137, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 333, in run
    raise ConnectionLostError(command=quote(args), node=name)
ConnectionLostError: SSH connection to msteuth14 was lost: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.1 /home/ubuntu/cephtest/workunit.client.1'

No progress

  • Status
  • Affected Tests
    • marginal:multimds/{clusters/3-node-9-mds.yaml fs/btrfs.yaml mounts/ceph-fuse.yaml tasks/workunit_misc.yaml thrash/exports.yaml}
  • Sample Output

