Focus-directed Video Stylization from RGBD

DC Field: Value
dc.contributor.advisor: Kim Dong Yoon
dc.contributor.author: Xu, Yinhong
dc.date.accessioned: 2018-11-08T08:17:47Z
dc.date.available: 2018-11-08T08:17:47Z
dc.date.issued: 2014-02
dc.identifier.other: 16176
dc.identifier.uri: https://dspace.ajou.ac.kr/handle/2018.oak/12504
dc.description: Thesis (Master's) -- Graduate School, Ajou University: Department of Computer Engineering, 2014. 2
dc.description.tableofcontents:
Ⅰ. Introduction 1
Ⅱ. Related Work 3
Ⅲ. Algorithm 5
  1. Algorithm overview 5
  2. Line drawing from RGBD 5
  3. Trilateral filter 7
  4. Adaptive trilateral filter 9
  5. Focus-directed framework 11
  6. Focus point tracking 17
Ⅳ. Experiment and Results 20
Ⅴ. Conclusion and Future Work 24
REFERENCES 26
dc.language.iso: eng
dc.publisher: The Graduate School, Ajou University
dc.rights: Ajou University theses are protected by copyright.
dc.title: Focus-directed Video Stylization from RGBD
dc.type: Thesis
dc.contributor.affiliation: Graduate School, Ajou University
dc.contributor.department: Department of Computer Engineering, Graduate School
dc.date.awarded: 2014. 2
dc.description.degree: Master
dc.identifier.localId: 608197
dc.identifier.url: http://dcoll.ajou.ac.kr:9080/dcollection/jsp/common/DcLoOrgPer.jsp?sItemId=000000016176
dc.subject.keyword: Computer Graphics
dc.subject.keyword: Computer Vision
dc.description.alternativeAbstract: This thesis presents a non-photorealistic rendering technique that creates focus-directed stylization from RGBD data, which contains not only RGB color information but also depth values, and applies this depth data to video stylization. Our method consists of four parts: line drawing, image abstraction, a focus-directed framework, and focus point tracking. In the first part, our approach extracts additional lines from the depth data as well as from the RGB color information. In the second part, we develop an adaptive trilateral filter to produce detail-preserving stylized images that show a clear depth cue. In the third part, a focus-directed framework is designed to produce a DOF-like effect based on a focus point selected by the user. Finally, the focus point is tracked through the video to achieve temporal coherence.
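
The image-abstraction step described in the abstract rests on a depth-augmented bilateral ("trilateral") filter. The sketch below is a minimal, illustrative Python version under stated assumptions: the function name, sigma parameters, and the brute-force single-scale loop are chosen for clarity and are not the thesis' adaptive formulation, which additionally varies the filter per pixel and is combined with line drawing and the focus-directed DOF effect.

    import numpy as np

    def trilateral_filter(rgb, depth, sigma_s=3.0, sigma_c=0.1, sigma_d=0.05, radius=4):
        """Depth-augmented bilateral (trilateral) smoothing - illustrative sketch only.

        Each pixel is averaged with its neighbors, weighted by spatial distance,
        color similarity, and depth similarity, so edges in either color or depth
        are preserved. Parameter names and values are assumptions, not the
        thesis' adaptive settings. Expects float RGB in [0, 1] of shape (h, w, 3)
        and an aligned depth map of shape (h, w).
        """
        h, w, _ = rgb.shape
        out = np.zeros_like(rgb)

        # Fixed spatial Gaussian over the (2*radius+1)^2 neighborhood.
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))

        pad = radius
        rgb_p = np.pad(rgb, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        d_p = np.pad(depth, pad, mode="edge")

        for y in range(h):
            for x in range(w):
                patch = rgb_p[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
                dpatch = d_p[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
                # Range terms: penalize color and depth differences separately.
                w_color = np.exp(-np.sum((patch - rgb[y, x])**2, axis=-1) / (2 * sigma_c**2))
                w_depth = np.exp(-(dpatch - depth[y, x])**2 / (2 * sigma_d**2))
                wgt = w_spatial * w_color * w_depth
                out[y, x] = np.sum(patch * wgt[..., None], axis=(0, 1)) / np.sum(wgt)
        return out

Applied to a normalized RGB frame and its aligned depth map, this smooths texture while keeping both color and depth discontinuities sharp, which is what lets the stylized frames retain a clear depth cue.
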
Appears in Collections:
Graduate School of Ajou University > Department of Computer Engineering > 3. Theses(Master)