Federated Learning (FL) is a learning paradigm in which multiple participants cooperatively train a shared model without exchanging their local data. However, because the server cannot inspect each participant's data or local training process, FL is vulnerable to backdoor attacks, in which an attacker manipulates the model so that inputs containing certain trigger features produce attacker-chosen outputs. To address this threat, a variety of backdoor defense methods have been studied. In this paper, we introduce and analyze these studies. Moreover, future research issues are presented at the end of the paper.