Generating a high-quality HDR image requires restoring the irradiance information lost in saturated pixels. To restore these pixels effectively, this paper proposes a method that generates an HDR image by combining feature maps extracted from brightened and darkened versions of the input image. In addition, a loss function is proposed that focuses on restoring over- and under-exposed regions, i.e., pixels with very high or very low values, so that the network concentrates on saturated-pixel restoration during training. Compared with existing methods, the proposed method achieves on average 9.1% higher HDR visual difference predictor (HDR-VDP) scores and 46.7% higher SSIM.
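To illustrate the idea of a loss that emphasizes saturated regions, a minimal sketch is given below. It is not the paper's implementation; the thresholds, weighting factor, and use of an L1 base loss are assumptions for illustration only, and the tensors are assumed to be PyTorch tensors with the LDR input normalized to [0, 1].

```python
import torch

def saturation_weighted_l1(pred_hdr: torch.Tensor,
                           gt_hdr: torch.Tensor,
                           ldr: torch.Tensor,
                           low: float = 0.05,
                           high: float = 0.95,
                           boost: float = 4.0) -> torch.Tensor:
    """L1 loss that up-weights pixels whose LDR values are near 0 or 1.

    `low`, `high`, and `boost` are illustrative hyperparameters, not values
    taken from the paper.
    """
    # Mask of over- or under-exposed pixels in the input LDR image.
    saturated = (ldr <= low) | (ldr >= high)
    # Per-pixel weights: larger where irradiance information was clipped.
    weights = torch.where(saturated,
                          torch.full_like(ldr, boost),
                          torch.ones_like(ldr))
    return (weights * (pred_hdr - gt_hdr).abs()).mean()
```

In such a scheme, the weight map steers gradient updates toward the clipped regions, which is the stated goal of the proposed loss; the exact weighting used in the paper may differ.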