As data privacy becomes increasingly important, federated learning has emerged as a technique for training deep learning models while preserving the privacy of data held on devices. Because federated learning updates the global model through a centralized server, reducing communication overhead is challenging, as is accommodating the system heterogeneity of devices. In this paper, we present a new architecture for federated learning with a serverless parameter server. Because gradients and global model updates in federated learning are event-driven, a serverless environment can be used to decouple the parameter server's synchronization process from the devices' model communication process. In addition, we propose AMBLE, which adaptively adjusts the local mini-batch size and the number of local epochs for heterogeneous devices in federated learning and updates the parameters synchronously. AMBLE increases useful computation during the waiting time caused by stragglers and scales the local learning rate to improve the model's convergence rate and accuracy. We confirm that federated learning with AMBLE trains stably and achieves faster convergence and higher accuracy than FedAvg in both the non-IID and IID cases.
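The abstract only states that AMBLE adapts the local mini-batch size and number of local epochs to each device and scales the local learning rate accordingly; the exact rules are defined later in the paper. The following is a minimal sketch of one such proportional assignment, assuming hypothetical device throughputs and helper names (`amble_local_config`, `device_speeds`); it illustrates the idea of filling straggler wait time with extra local work, not the paper's actual algorithm.

```python
def amble_local_config(device_speeds, base_epochs=1, base_batch_size=32, base_lr=0.01):
    """Assign per-device local work in the spirit of AMBLE (illustrative only).

    Faster devices receive proportionally more local epochs so that every device
    finishes close to the straggler's completion time, and the local learning
    rate is scaled with the increased workload (assumed linear rule).
    """
    slowest = min(device_speeds.values())  # straggler throughput (samples/sec)
    configs = {}
    for dev, speed in device_speeds.items():
        # Workload multiplier: how much extra work fits into the straggler wait time.
        scale = speed / slowest
        configs[dev] = {
            "local_epochs": max(1, round(base_epochs * scale)),
            "batch_size": base_batch_size,   # could also be scaled per device
            "local_lr": base_lr * scale,     # assumed linear learning-rate scaling
        }
    return configs


if __name__ == "__main__":
    # Hypothetical device throughputs in samples per second.
    speeds = {"phone_a": 50.0, "phone_b": 120.0, "edge_gpu": 400.0}
    for dev, cfg in amble_local_config(speeds).items():
        print(dev, cfg)
```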