Federated Learning (FL) protects data privacy by sharing gradients across clients rather than local training data. How to motivate clients to actively contribute local data and participate in the FL aggregation process remains an open research problem. This paper proposes a novel Mean-Field-Game-based Federated Learning incentive mechanism. We first model the FL aggregation process across clients as a mean-field game. We then design a mean-field FL gradient-calculation algorithm based on stochastic differential equations, solving the associated Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations. Building on this, we construct an efficient client reputation-aware incentive mechanism that improves global learning performance by comparing the cosine similarity between the mean-field gradient and each client's individual gradient. Finally, experimental results show that our incentive mechanism outperforms baseline algorithms in learning performance.
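The abstract does not specify how the cosine-similarity comparison is turned into reputations; the following is a minimal NumPy sketch under stated assumptions (the function names, the clipping of negative similarities, and the normalization into weights are all illustrative choices, not the paper's method):

```python
import numpy as np

def cosine_similarity(g1, g2):
    # Cosine of the angle between two flattened gradient vectors;
    # the small epsilon guards against division by zero.
    return float(np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12))

def reputation_weights(mean_field_grad, client_grads):
    """Score each client's gradient against the mean-field gradient.

    Assumption: clients whose gradients align with the mean-field
    direction earn higher reputation; negative similarities are
    clipped to zero so opposing updates receive no weight.
    """
    sims = np.array([cosine_similarity(mean_field_grad, g) for g in client_grads])
    sims = np.clip(sims, 0.0, None)
    return sims / (sims.sum() + 1e-12)  # normalize into aggregation weights
```

For example, a client whose gradient points along the mean-field gradient receives full weight, while orthogonal or opposing clients are down-weighted to zero under this clipping choice.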