
Welcome to the Department of Software at Ajou University.

Department Notices

[2023.01.20 (Fri)] Artificial Intelligence Colloquium Announcement
  • Author: Administrator
  • Posted: 2023-01-16 13:28:15
  • Views: 192
The Graduate Department of Artificial Intelligence and the Stage-4 BK21 Ajou DREAM AI Innovative Talent Training Group will hold an Artificial Intelligence Colloquium on Friday, January 20 at 11:00 AM.
We welcome anyone interested to attend.

▶ When: 2023.1.20 (Fri), 11:00 AM~
▶ Where: Paldal Hall, Room 407
▶ Speaker: Jongse Park (Professor, KAIST)
▶ Title: Algorithm-Hardware-Software Co-Design to Build Specialized Systems

▶ Abstract: Modern retrospective analytics systems leverage a cascade architecture to mitigate the bottleneck of computing deep neural networks (DNNs). However, existing cascades suffer from two limitations: (1) the decoding bottleneck is either neglected or circumvented, paying a significant compute and storage cost for pre-processing; and (2) the systems are specialized for temporal queries and lack spatial query support. In this talk, I will present CoVA, a novel cascade architecture that splits the cascade computation between the compressed domain and the pixel domain to address the decoding bottleneck while supporting both temporal and spatial queries. CoVA cascades analysis into three major stages: the first two are performed in the compressed domain, while the last is performed in the pixel domain. First, CoVA detects occurrences of moving objects (called blobs) over a set of compressed frames (called tracks). Then, using the track results, CoVA prudently selects a minimal set of frames to obtain the label information and decodes only those frames to compute the full DNNs, alleviating the decoding bottleneck. Lastly, CoVA associates tracks with labels to produce the final analysis results, on which users can process both temporal and spatial queries.
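The three-stage cascade described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch only, not the actual CoVA implementation: the frame representation, the track-grouping heuristic, and the DNN stub are all hypothetical stand-ins.

```python
# Hypothetical sketch of a CoVA-style three-stage cascade.
# A "frame" here is a dict whose "motion" flag stands in for
# compressed-domain motion metadata (e.g., motion vectors).

def detect_blobs(frames):
    # Stage 1 (compressed domain): indices of frames containing moving objects.
    return [i for i, f in enumerate(frames) if f["motion"]]

def build_tracks(blob_frames, gap=1):
    # Stage 2 (compressed domain): group nearby blob frames into tracks.
    tracks, cur = [], []
    for i in blob_frames:
        if cur and i - cur[-1] > gap:
            tracks.append(cur)
            cur = []
        cur.append(i)
    if cur:
        tracks.append(cur)
    return tracks

def label_tracks(tracks, decode_and_classify):
    # Stage 3 (pixel domain): decode one representative frame per track
    # and run the full DNN only on it, avoiding full-video decoding.
    return {tuple(t): decode_and_classify(t[len(t) // 2]) for t in tracks}

# Usage: 10 frames, with motion only in frames 2-4 and 7-8.
frames = [{"motion": i in (2, 3, 4, 7, 8)} for i in range(10)]
tracks = build_tracks(detect_blobs(frames))     # [[2, 3, 4], [7, 8]]
labels = label_tracks(tracks, lambda i: "car")  # DNN stub for illustration
```

Only two of the ten frames are decoded and classified here, which is the essence of the decoding-bottleneck argument in the abstract: the expensive pixel-domain work runs on a minimal frame set selected from compressed-domain tracks.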

▶ Bio: Jongse Park is an assistant professor in the School of Computing at the Korea Advanced Institute of Science and Technology (KAIST), where he co-leads the Computer Architecture and Systems Laboratory (CASYS). His research focuses on building hardware-software co-designed systems for emerging algorithms and applications. He received his PhD in computer science from the Georgia Institute of Technology. Before joining KAIST, he was a deep learning acceleration solution architect at Bigstream Solutions Inc. He was a guest editor of the IEEE Micro Special Issue on Machine Learning Acceleration and has served on the program committees of top international conferences, including ISCA and DAC.
   
▶ Host: Prof. Jeongseob Ahn, Department of Software (jsahn@ajou.ac.kr)




