
Department Notices

[2024.04.29 (Mon)] Artificial Intelligence & AI Convergence Network Colloquium Announcement
  • Author: Administrator
  • Posted: 2024-04-23 16:50:32
  • Views: 128

The Colloquium jointly organized with the Artificial Intelligence & AI Convergence Network project team will be held on Monday, April 29 at 3 PM. We look forward to your participation.


▶️  When : Monday, April 29, 2024, 3 PM
▶️  Where : Paldal Hall, Room 407
▶️  Speaker : Prof. Byungkon Kang (SUNY Korea)
▶️  Title : On the efficiency of pre-trained word embeddings on Transformers

▶️  Abstract : In this talk, I will discuss a counter-intuitive phenomenon where state-of-the-art pretrained word embedding vectors perform poorly compared to randomly initialized vectors.
More specifically, such a tendency is observed almost exclusively in Transformer architectures, making it a problem for most modern NLP systems, which depend heavily on Transformers.
I also describe a very simple remedy that somewhat alleviates this shortcoming, as well as numerous empirical results to back this claim.
If time permits, I will also share a failure case of another popular deep learning system (autograd), leading us to a discussion of whether we should take prevalent technologies in this field for granted.

▶️  Bio : Byungkon Kang received his PhD from KAIST in 2013 before joining Samsung Advanced Institute of Technology. After that, he spent a few years at Ajou University as a research professor. He is currently an assistant professor at SUNY Korea, working on various AI/ML topics.

▶️  Host : Prof. Kyung-Ah Sohn, Department of Software (kasohn@ajou.ac.kr)

Attachment: Colloquium poster.png
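
As a purely illustrative aside on the abstract above: below is a minimal PyTorch sketch of how one might set up a comparison between a Transformer encoder fed by randomly initialized embeddings and one fed by embeddings loaded from pretrained vectors. All names, sizes, and the stand-in "pretrained" matrix are hypothetical placeholders; this is not the speaker's actual experimental setup or remedy.

# Minimal sketch (assumed setup, not the talk's method): compare a Transformer
# encoder using randomly initialized embeddings vs. embeddings loaded from a
# pretrained table. The "pretrained" tensor here is a random stand-in; in
# practice it would hold GloVe/word2vec/fastText vectors.
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64  # hypothetical toy sizes

# Embedding table trained from scratch together with the model.
random_emb = nn.Embedding(vocab_size, embed_dim)

# Embedding table initialized from (stand-in) pretrained vectors, still trainable.
pretrained_vectors = torch.randn(vocab_size, embed_dim)
pretrained_emb = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)

# A small Transformer encoder that consumes either embedding table.
encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

tokens = torch.randint(0, vocab_size, (8, 16))      # (batch, seq_len) of token ids
out_random = encoder(random_emb(tokens))            # forward pass with random init
out_pretrained = encoder(pretrained_emb(tokens))    # forward pass with pretrained init
print(out_random.shape, out_pretrained.shape)       # both (8, 16, 64)

In a real study, each variant would be trained to convergence on the same downstream task and compared; the sketch only shows where the two initialization choices enter the model.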