The emergence of light detection and ranging (LiDAR) technologies has resulted in the collection of massive amounts of three-dimensional (3D) point-cloud data. Traditional interpolation approaches require extensive computational resources and long runtimes to create a digital elevation model (DEM) from these LiDAR data. Recent advances in high-performance computing have enabled researchers to develop parallel approaches, and several parallel implementation methods have been proposed for processing the massive point-cloud datasets acquired by LiDAR. However, such parallelization strategies demand scheduling, load balancing, and heavy communication and I/O operations, which traditional approaches do not. In this dissertation, we present a customized data distribution scheme for massive LiDAR data processing and develop a parallel point-cloud processing (3P) tool to support the scheme. The customized data distribution scheme dynamically allocates grid sizes to processes in a parallel environment based on the characteristics of the point-cloud data. This scheme enhances the performance of the parallel processing tool by providing better load balancing among processes. The 3P tool is implemented on top of message passing interface (MPI) libraries and supports two popular interpolation algorithms: inverse distance weighting (IDW) and kriging. 3P also provides an interactive graphical user interface mode that encapsulates the complexity of the parallelized interpolation methods. We evaluated the proposed scheme with our 3P tool on two computing environments (x86 and ARM) using various point-cloud datasets. The results demonstrate that the proposed data distribution scheme achieves better load balancing than the static approach and provides efficient performance speedup.
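To make the interpolation step concrete, the following is a minimal serial sketch of inverse distance weighting, one of the two algorithms the abstract names. It is an illustrative example only, not the 3P implementation: the function name, the power parameter of 2, and the sample coordinates are all assumptions introduced here.

```python
import math

def idw(points, qx, qy, power=2.0):
    """Estimate elevation at query point (qx, qy) by inverse distance weighting.

    points: iterable of (x, y, z) samples, as from a LiDAR point cloud.
    Each sample's weight is 1 / distance**power, so nearer points
    dominate the weighted average; an exact hit returns its z directly.
    """
    num = 0.0  # sum of weight * z
    den = 0.0  # sum of weights
    for x, y, z in points:
        d = math.hypot(qx - x, qy - y)
        if d == 0.0:
            return z  # query coincides with a sample point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Example: a DEM cell center equidistant from two samples
# receives the plain average of their elevations.
samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
print(idw(samples, 0.5, 0.0))
```

A parallel DEM generator applies this per-cell estimate over every cell of the output grid, which is why how the grid (and its points) are partitioned among processes directly determines load balance.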