AMBLE: Adjusting Mini-Batch and Local Epoch Adaptively for Federated Learning with Serverless Parameter Server
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 오상윤 | - |
dc.contributor.author | 박주원 | - |
dc.date.accessioned | 2022-11-29T02:32:51Z | - |
dc.date.available | 2022-11-29T02:32:51Z | - |
dc.date.issued | 2021-08 | - |
dc.identifier.other | 31174 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/handle/2018.oak/20400 | - |
dc.description | Thesis (Master's)--The Graduate School, Ajou University: Department of Artificial Intelligence, August 2021 | - |
dc.description.tableofcontents | I INTRODUCTION 1<br>A. BACKGROUND AND MOTIVATION 1<br>B. PROBLEM STATEMENT AND CONTRIBUTION 4<br>II BACKGROUND AND RELATED WORKS 7<br>A. BACKGROUND 7<br>1 Federated Learning 7<br>2 Serverless Computing 10<br>B. RELATED WORKS 13<br>1 Local Mini-batch SGD 13<br>2 Adaptive Batch Size Approach 17<br>III AMBLE WITH SERVERLESS PARAMETER SERVER 20<br>A. ARCHITECTURE OF SERVERLESS PARAMETER SERVER 20<br>B. PROCESS OF AMBLE IN FEDERATED LEARNING 22<br>IV EXPERIMENT AND EVALUATION 27<br>A. EXPERIMENT SETUP 27<br>1 Experiment environment 27<br>B. EXPERIMENT RESULT 29<br>1 MNIST with IID Case 29<br>2 MNIST with Non-IID Case 32<br>3 CIFAR-10 with IID Case 34<br>4 CIFAR-10 with Non-IID Case 35<br>5 Reusing Serverless Instance 36<br>V CONCLUSION AND FUTURE WORKS 38<br>REFERENCE 39 | - |
dc.language.iso | eng | - |
dc.publisher | The Graduate School, Ajou University | - |
dc.rights | Ajou University theses are protected by copyright. | - |
dc.title | AMBLE: Adjusting Mini-Batch and Local Epoch Adaptively for Federated Learning with Serverless Parameter Server | - |
dc.title.alternative | AMBLE: An Adaptive Mini-Batch and Local Epoch Adjustment Scheme for Federated Learning Based on a Serverless Parameter Server | - |
dc.type | Thesis | - |
dc.contributor.affiliation | The Graduate School, Ajou University | - |
dc.contributor.alternativeName | Juwon Park | - |
dc.contributor.department | Department of Artificial Intelligence, The Graduate School | - |
dc.date.awarded | 2021. 8 | - |
dc.description.degree | Master | - |
dc.identifier.localId | 1227947 | - |
dc.identifier.uci | I804:41038-000000031174 | - |
dc.identifier.url | https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000031174 | - |
dc.subject.keyword | Federated Learning | - |
dc.subject.keyword | Local SGD | - |
dc.subject.keyword | Serverless Computing | - |
dc.subject.keyword | System Heterogeneity | - |
dc.description.alternativeAbstract | As data privacy becomes increasingly important, federated learning has emerged as a technique for training deep learning models while preserving the privacy of data held on devices. Because federated learning updates the global model through a centralized server, reducing communication overhead is challenging, as is accounting for the system heterogeneity of devices. In this paper, we present a new architecture for federated learning with a serverless parameter server. Because gradient and global-model updates in federated learning are event-driven, a serverless environment can be used to decouple the parameter server's synchronization process from the devices' model communication process. In addition, we propose AMBLE, which adaptively adjusts the local mini-batch and local epoch sizes for heterogeneous devices in federated learning while updating parameters synchronously. AMBLE fills the waiting time caused by stragglers with additional computation and scales the local learning rate to improve the model's convergence rate and accuracy. We confirm that federated learning with AMBLE trains stably, with faster convergence and higher accuracy than FedAvg, in both the IID and non-IID cases. | - |
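
The abstract describes AMBLE's core mechanism: sizing each heterogeneous client's local mini-batch and local epoch count so that stragglers no longer leave faster clients idle in a synchronous round, and scaling the local learning rate accordingly. The thesis itself is not reproduced in this record, so the Python sketch below is only a plausible illustration of that idea under stated assumptions: the `Client` and `plan_round` names, the batch-size cap, and the linear learning-rate scaling rule are all hypothetical, not taken from the thesis.

```python
# Illustrative sketch only -- not the thesis's actual implementation.
# Idea from the abstract: give faster clients more local work (larger
# mini-batches, then extra local epochs) so they stay busy during the
# time they would otherwise spend waiting for stragglers.
from dataclasses import dataclass


@dataclass
class Client:
    name: str
    samples_per_sec: float  # measured local training throughput


def plan_round(clients, base_batch=32, base_epochs=1, base_lr=0.01, max_batch=128):
    """Scale each client's local workload by its speed relative to the slowest.

    Faster clients first get a larger mini-batch; once the batch hits the
    cap, the remaining speedup is converted into extra local epochs. The
    local learning rate grows linearly with the batch size (a common
    large-batch heuristic; the thesis may use a different scaling rule).
    """
    slowest = min(c.samples_per_sec for c in clients)
    plans = {}
    for c in clients:
        speedup = c.samples_per_sec / slowest  # >= 1.0 by construction
        batch = min(int(base_batch * speedup), max_batch)
        # Speedup left over beyond the batch cap becomes additional epochs.
        epochs = max(base_epochs, int(base_epochs * speedup * base_batch / batch))
        lr = base_lr * batch / base_batch  # linear LR scaling with batch size
        plans[c.name] = {"batch_size": batch, "local_epochs": epochs, "lr": lr}
    return plans


if __name__ == "__main__":
    # A straggler (phone), a mid-range device, and a fast workstation.
    fleet = [Client("phone", 50.0), Client("laptop", 200.0), Client("gpu", 800.0)]
    for name, plan in plan_round(fleet).items():
        print(name, plan)
```

With these numbers the phone keeps the base workload (batch 32, 1 epoch), the laptop's batch grows to the 128 cap, and the workstation additionally runs 4 local epochs, so all three finish a synchronous round in roughly the same wall-clock time.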