The Hadoop Distributed File System (HDFS) is designed to store and manage large files. It keeps the file system metadata in the NameNode's memory for high performance. Since there is a single NameNode in an HDFS cluster, the NameNode suffers from high memory usage when a massive number of small files is processed. This problem is referred to as 'the small file problem'. It occurs because HDFS stores each small file in a separate storage block on a DataNode and maintains an individual metadata entry for it on the NameNode. Researchers have suggested merging small files into a large file the size of one HDFS block to reduce the memory usage of the NameNode. They considered HDFS with a contiguous block layout when solving the small file problem. However, Hadoop adopts a striped block layout when Erasure Coding is enabled. In this layout, a file is divided into smaller cells of 1 MB and distributed across multiple storage blocks on the DataNodes. This creates an opportunity to further reduce the memory consumed by small files: the merged file size can be increased to fully fill these multiple storage blocks while the amount of metadata stays the same. Therefore, we propose a new scheme for solving the small file problem that further reduces the memory usage of the NameNode by exploiting the striped block layout. This, however, introduces a new challenge, as novel file indexing and file extraction methods are needed to access the small files. We introduce a program named Small File Processor (SFP) that performs file merging and indexing on small files. A file extraction algorithm is implemented to read the small files from their corresponding merged file. The experimental results show that the proposed scheme reduces the memory usage of the NameNode and improves the write access speed of small files compared to default HDFS with Erasure Coding.
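To make the extraction step concrete, the following is a minimal sketch in Java, assuming each small file is located within its merged file by an (offset, length) pair recorded in an index during merging; the class name, merged file path, and index values are hypothetical illustrations and may differ from the actual SFP design.

// Hypothetical sketch of reading one small file from a merged HDFS file,
// given an (offset, length) index entry. Not the paper's exact implementation.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallFileReader {
    /** Reads {@code length} bytes starting at {@code offset} from the merged file. */
    public static byte[] extract(FileSystem fs, Path mergedFile,
                                 long offset, int length) throws Exception {
        byte[] data = new byte[length];
        try (FSDataInputStream in = fs.open(mergedFile)) {
            in.seek(offset);                 // jump to where the small file starts
            in.readFully(data, 0, length);   // read exactly the small file's bytes
        }
        return data;
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // The (offset, length) pair would come from the index built at merge time.
        byte[] smallFile = extract(fs, new Path("/merged/bundle_0001"), 4096L, 1024);
        System.out.println("Read " + smallFile.length + " bytes");
    }
}

Because the merged file is stored with the striped block layout, a single such read may touch several DataNodes, but the NameNode still holds only one set of metadata for the whole merged file.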