ARROW-7805: [C++][Python] HDFS Remove (rm) operation defaults to SkipTrash


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 0.13.0
    • Fix Version/s: None
    • Component/s: C++, Python

    Description

      The pyarrow library (using libhdfs) appears to default to SkipTrash behavior, which is not the Hadoop default. This turned out to be a major issue for a recent project. The HadoopFileSystem `delete` method currently defaults to `recursive=False`; I believe a similar `skipTrash=False` default would be appropriate here.

      Or, if this is not possible, it may be best to print a `WARNING` to the console to alert users that this is a point of no return.
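
      For illustration, such a guard could be wrapped around `rm` today. This is a minimal sketch, assuming a connected `HadoopFileSystem` handle `fs`; `rm_with_trash_guard` and its `skip_trash` parameter are hypothetical, not an existing pyarrow API:

      import warnings

      # Hypothetical guard -- pyarrow's rm has no skip_trash option, so until
      # one exists this can only warn before the irreversible delete.
      def rm_with_trash_guard(fs, path, recursive=False, skip_trash=False):
          """Delete `path`, warning when the HDFS Trash is being bypassed."""
          if not skip_trash:
              warnings.warn(
                  f"Permanently deleting {path}: pyarrow does not move files "
                  "to the HDFS Trash, so this is a point of no return.",
                  stacklevel=2,
              )
          fs.rm(path, recursive=recursive)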

      The two tests below show the difference in default behavior between the `hadoop fs` shell commands and `pyarrow`.

      # test using hadoop fs shell commands 
      
      # setup test & confirm that file exists 
      $ testfile="/user/myusername/testfile1" 
      $ hadoop fs -touchz $testfile && hadoop fs -ls $testfile 
      -rw-r----- 3 myusername mygroup 0 2020-02-08 13:25 /user/myusername/testfile1 
      
      # remove the file and confirm that it is moved to the Trash 
      $ hadoop fs -rm $testfile 
      20/02/08 13:26:04 INFO fs.TrashPolicyDefault: Moved: 'hdfs://nameservice1/user/myusername/testfile1' to trash at: hdfs://nameservice1/user/.Trash/myusername/Current/user/myusername/testfile1 
      
      # verify that it is in the Trash 
      $ hadoop fs -ls /user/.Trash/myusername/Current/user/myusername/testfile1 
      -rw-r----- 3 myusername mygroup 0 2020-02-08 13:25 /user/.Trash/myusername/Current/user/myusername/testfile1
      
      
      # test using pyarrow
      import os
      import subprocess

      from app import conf  # project-specific config providing the libhdfs location

      LIBHDFS_PATH = conf["libhdfs_path"]
      os.environ["ARROW_LIBHDFS_DIR"] = LIBHDFS_PATH
      import pyarrow

      TEST_FILE = 'testfile2'
      TEST_FILE_PATH = f'/user/myusername/{TEST_FILE}'
      TRASH_FILE_PATH = f'/user/.Trash/myusername/Current/user/myusername/{TEST_FILE}'

      fs = pyarrow.hdfs.connect(driver="libhdfs")

      def setup_test():
          """Create the test file."""
          print('create test file...')
          subprocess.run(f'hadoop fs -touchz {TEST_FILE_PATH}'.split(), check=True)

      def run_test():
          """Remove the test file and report the outcome."""
          setup_test()
          try:
              print(f'check if test file {TEST_FILE} exists: {fs.exists(TEST_FILE_PATH)}')
              print(f'attempt to remove: {TEST_FILE}')
              fs.rm(TEST_FILE_PATH)
              print(f'file {TEST_FILE} removed successfully')
          except Exception as e:
              print(f'encountered an error in run_test: {e}')

      def check_file_in_hdfs_trash():
          """Verify whether the removed file was moved to the Trash."""
          if fs.exists(TRASH_FILE_PATH):
              print(f'test file {TEST_FILE} found in {TRASH_FILE_PATH}')
          else:
              print(f'test file {TEST_FILE} not found in {TRASH_FILE_PATH}!!')

      run_test()
      check_file_in_hdfs_trash()
      
      # output...
      create test file...
      check if test file testfile2 exists: True
      attempt to remove: testfile2
      file testfile2 removed successfully
      test file testfile2 not found in /user/.Trash/myusername/Current/user/myusername/testfile2!!
      
      
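      Until the default changes, a client-side workaround is to emulate `TrashPolicyDefault` by renaming into the user's Trash directory instead of calling `rm`. This is a minimal sketch, assuming the `fs` handle and constants from the test above and the Trash layout shown in the shell session; `rm_to_trash` is a hypothetical helper, and checkpoint rotation (what `hadoop fs -expunge` performs) is not handled:

      import posixpath

      # Move the file under the Trash, mirroring what `hadoop fs -rm` does,
      # instead of deleting it permanently.
      def rm_to_trash(fs, path, user):
          """Move `path` under the user's HDFS Trash instead of deleting it."""
          trash_dir = f'/user/.Trash/{user}/Current{posixpath.dirname(path)}'
          if not fs.exists(trash_dir):
              fs.mkdir(trash_dir)  # libhdfs creates missing parent directories
          fs.rename(path, posixpath.join(trash_dir, posixpath.basename(path)))

      # e.g. rm_to_trash(fs, TEST_FILE_PATH, 'myusername') would leave the file at
      # /user/.Trash/myusername/Current/user/myusername/testfile2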
People

    Assignee: Unassigned
    Reporter: bb (bbapache)