Altered decision-making behavior in mice with genetically engineered dopamine receptors

DC Field | Value
dc.contributor.advisor | 김병곤, 정민환
dc.contributor.author | Kwak, Shin Ae
dc.date.accessioned | 2019-10-21T07:25:00Z
dc.date.available | 2019-10-21T07:25:00Z
dc.date.issued | 2015-02
dc.identifier.other | 18847
dc.identifier.uri | https://dspace.ajou.ac.kr/handle/2018.oak/18661
dc.description | Thesis (Doctoral) -- The Graduate School, Ajou University: Department of Biomedical Sciences, 2015. 2
dc.description.tableofcontents |
I. INTRODUCTION
  A. Reinforcement Learning
    1. Dilemma of exploration and exploitation
    2. Q-learning model as a model-free method
  B. The dopaminergic system
    1. Features of the dopamine system
    2. Dopamine-mediated learning
  C. Aim of the thesis
II. MATERIALS AND METHODS
  A. Genetic information of subjects
  B. Apparatus
  C. Behavioral task
    1. Reversal learning task
    2. TAB task
  D. Experimental groups
  E. Analysis
    1. Logistic regression
    2. Computational models
    3. Statistics
III. RESULTS
  A. Behavioral performance
    1. Reversal task
    2. TAB task
  B. Logistic regression analysis
  C. Modeling
IV. DISCUSSION
  A. Motor deficits in both D1R- and D2R-KO mice
  B. Role of D2R in rapid adjustment of choice behavior
  C. Learning from RPE
  D. Value-dependent action selection
  E. Win-stay
  F. Uncertainty-driven exploration
  G. Multiple roles of dopamine in reward effects
  H. Model-free vs. model-based RL
  I. Future directions
V. CONCLUSION
dc.language.iso | eng
dc.publisher | The Graduate School, Ajou University
dc.rights | Ajou University theses are protected by copyright.
dc.title | Altered decision-making behavior in mice with genetically engineered dopamine receptors
dc.type | Thesis
dc.contributor.affiliation | The Graduate School, Ajou University
dc.contributor.alternativeName | Shinae Kwak
dc.contributor.department | Graduate School, Department of Biomedical Sciences
dc.date.awarded | 2015. 2
dc.description.degree | Doctoral
dc.identifier.localId | 695776
dc.identifier.url | http://dcoll.ajou.ac.kr:9080/dcollection/jsp/common/DcLoOrgPer.jsp?sItemId=000000018847
dc.subject.keyword | dopamine
dc.subject.keyword | reinforcement learning
dc.description.alternativeAbstract | Since the finding that phasic activity of midbrain dopamine neurons signals the difference between actual and predicted outcomes (the reward prediction error), dopamine has been thought to play an important role in updating values according to the reward prediction error, as postulated by reinforcement learning theory. However, the extent and nature of dopamine's roles in reward-based learning are still under debate, and the specific roles of different dopamine receptor subtypes in this process are also unknown. To investigate the roles of dopamine receptor subtypes in reward-based learning, I examined the choice behavior of dopamine D1 and D2 receptor-knockout (D1R-KO and D2R-KO, respectively) mice in an instrumental learning task with progressively increasing reversal frequency and in a dynamic foraging task. The performance of D2R-KO mice was progressively impaired in the instrumental learning task as the frequency of reversals increased, and was profoundly impaired in the dynamic foraging task even with prolonged training, whereas D1R-KO mice showed only minor performance deficits. The animals' choice behavior in dynamic foraging was better explained by a hybrid model that included both win-stay-lose-switch and reinforcement learning terms than by simple reinforcement learning alone. The hybrid model-based analysis revealed that D1R-KO mice showed increased win-stay and uncertainty-based exploration, while D2R-KO mice also showed increased win-stay but, at the same time, showed impaired value updating and increased randomness in action selection, both of which were detrimental to maximizing rewards in the dynamic foraging task. These results indicate that dopamine D2 receptors, rather than D1 receptors, are important for learning from past choice outcomes to optimize choice strategy in a dynamic and uncertain environment.
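
The hybrid model described in the abstract combines reinforcement learning value updates driven by the reward prediction error with a win-stay-lose-switch term and a stochastic action-selection rule. The following is a minimal sketch of such a model, assuming a two-armed bandit with binary rewards, softmax action selection, and an additive win-stay-lose-switch bias; the parameter names (alpha, beta, kappa) and the function hybrid_choice_probs are illustrative assumptions, not the thesis's actual model specification.

import numpy as np

def hybrid_choice_probs(choices, rewards, alpha, beta, kappa):
    """Per-trial probability of the observed choice in a two-armed bandit
    under a hybrid model: Q-learning plus a win-stay-lose-switch bias.

    choices: sequence of 0/1 (which target was chosen on each trial)
    rewards: sequence of 0/1 (whether that choice was rewarded)
    alpha:   learning rate for the reward-prediction-error update
    beta:    inverse temperature; lower values mean more random choices
    kappa:   weight of the win-stay-lose-switch term
    """
    q = np.zeros(2)                    # action values for the two targets
    probs = np.zeros(len(choices))
    for t, (c, r) in enumerate(zip(choices, rewards)):
        bias = np.zeros(2)
        if t > 0:                      # stay bias after a win, switch bias after a loss
            bias[choices[t - 1]] = kappa if rewards[t - 1] else -kappa
        logits = beta * q + bias
        p_right = 1.0 / (1.0 + np.exp(logits[0] - logits[1]))  # softmax over 2 actions
        probs[t] = p_right if c == 1 else 1.0 - p_right
        q[c] += alpha * (r - q[c])     # value update from the reward prediction error
    return probs

Minimizing -np.sum(np.log(probs)) over (alpha, beta, kappa) fits the model to observed choices, and comparing that fit against a pure reinforcement learning model (kappa = 0) parallels the model comparison reported in the abstract: a lower fitted beta corresponds to increased randomness in action selection, and a higher kappa to an increased win-stay tendency.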
Appears in Collections:
Graduate School of Ajou University > Department of Biomedical Sciences > 4. Theses(Ph.D)
Files in This Item:
There are no files associated with this item.

