Digital Twin (DT) technology is a promising approach to mapping vehicular contexts between the physical and virtual worlds in a collaborative autonomous driving (CAD) system. Built on C-V2X, 6G, Mobile Edge Computing (MEC), Machine Learning (ML), and other technologies, DT enables the creation of robust and reliable digital twin-based collaborative autonomous driving architectures, providing a platform for testing, validating, and refining autonomous driving systems efficiently and safely. However, future vehicular systems will require greater real-time processing and resource collaboration capabilities for autonomous vehicles (AVs). In this paper, we present a DT-based three-layer vehicular context offloading architecture that provides better resource management for AVs. To improve the Quality of Service (QoS) and reduce processing latency, a Deep Reinforcement Learning and Mean Field Game method (\method) is proposed, in which the dynamic, real-time interaction among AVs is approximated as a mean-field game during DT resource allocation. Simulations in CARLA demonstrate that our proposed algorithm significantly reduces task offloading latency and improves average rewards by 28.5\%, 3.5\%, and 6.8\% compared with the traditional DDPG, TD3, and AC algorithms, respectively.