Preventing gender bias is one of the most critical issues in reducing discrimination in recruitment processes. AI recruitment systems, whose decisions are expected to be more objective than those of humans, have been applied to screen applicants with the aim of fairer assessment, but recent cases report that AI recruiters reproduce the gender bias of human recruiters. Building on such practical evidence, previous studies attempted to design AI recruitment systems that overcome bias, under the assumption that an AI trained on human decisions inherits human bias. This study, however, argues that this inheritance of bias is misunderstood, because the mechanisms of AI decision-making and human decision-making differ. The experiment conducted in this study showed that human screening decisions become fairer as the applicant's gender becomes more explicit, while AI screening decisions become less fair under the same condition. Although human recruiters' decisions are biased by their stereotypes, their natural moral sense leads them to consciously resist discrimination when they clearly recognize the applicant's gender. AI recruiters, in contrast, have neither stereotypes nor a natural moral sense, so they tend to overfit to the gender feature when the applicant's gender is explicit. This study presents an inductive counterexample to the deductive proposition that AI inherits human bias as is, and proposes that applying an AI recruitment system to applicant screening can reduce gender bias. In a structure where human recruiters make decisions by referring to an AI recruiter's decision, the AI may reduce human recruiters' reliance on their stereotypes. Additionally, the results imply that blind recruitment may become more important as AI recruitment becomes more common, given that AI tends to overfit to an explicitly specified gender feature of the applicant.
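The overfitting mechanism described above can be sketched with a toy example. The following is a minimal synthetic illustration, not the paper's actual experiment: it assumes a biased hiring history is used as training labels and that gender is an explicit input feature, and shows that a plain logistic regression then assigns direct weight to the gender feature, reproducing the historical bias. All variable names and data-generating choices here are hypothetical.

```python
import math
import random

# Toy illustration (not the paper's experiment): a biased hiring history
# is used as training labels, and gender is an explicit input feature.
random.seed(0)

def make_applicant():
    q = random.random()          # qualification score in [0, 1]
    g = random.randint(0, 1)     # gender flag (hypothetical encoding)
    # Biased historical decision: group g=1 faces a stricter threshold.
    y = 1 if q > (0.5 if g == 0 else 0.7) else 0
    return [1.0, q, g], y        # features: [bias term, qualification, gender]

data = [make_applicant() for _ in range(2000)]

# Plain logistic regression trained by full-batch gradient descent.
w = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(200):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        for i in range(3):
            grad[i] += (p - y) * x[i]
    for i in range(3):
        w[i] -= lr * grad[i] / len(data)

# The model absorbs the historical bias: the explicit gender feature
# receives a negative weight, directly penalizing g=1 applicants.
print(f"qualification weight: {w[1]:.2f}, gender weight: {w[2]:.2f}")
```

Removing the gender column from the feature vector forces this model to rely on qualification alone, which is the intuition behind the blind-recruitment implication above: the model has no stereotype of its own, but it will exploit any explicitly encoded gender signal that correlates with past biased decisions.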