Multi-objective optimization (MOO) has a wide range of applications in engineering, finance, and environmental management, but its computational cost and high-dimensional search spaces limit its efficiency. In recent years, the rapid development of machine learning (ML) has opened new avenues for multi-objective optimization. This project explores how Transformer architectures, foundation models, and deep transfer learning can improve the performance and efficiency of multi-objective optimization. First, the Transformer's self-attention mechanism can capture complex dependencies in high-dimensional data, making it well suited to objective-function modeling and constraint handling in MOO. Second, foundation models pre-trained on large-scale data learn general feature representations, which significantly improves the generalization ability of optimization algorithms, especially in data-scarce scenarios. Finally, deep transfer learning transfers knowledge from a pre-trained model to a specific optimization task, reducing the dependence on target-domain data, thereby accelerating the optimization process and improving solution quality.
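To make the first idea concrete, the sketch below (an illustrative assumption, not the project's actual model) shows how a Transformer encoder could serve as a surrogate that maps decision variables to several objective values, so that expensive objective evaluations can be approximated by a learned model during multi-objective search. The class name `SurrogateTransformer` and all dimensions are hypothetical choices; PyTorch is assumed.

```python
import torch
import torch.nn as nn

class SurrogateTransformer(nn.Module):
    """Self-attention surrogate for an m-objective problem with d decision variables."""
    def __init__(self, n_vars: int = 30, n_objectives: int = 3,
                 d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Embed each scalar decision variable as a token so self-attention
        # can model pairwise dependencies between variables.
        self.embed = nn.Linear(1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_objectives)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_vars) -> tokens: (batch, n_vars, d_model)
        tokens = self.embed(x.unsqueeze(-1))
        encoded = self.encoder(tokens)
        # Pool over the variable dimension and predict all objectives jointly.
        return self.head(encoded.mean(dim=1))

# Usage: predict objective vectors for a batch of candidate solutions.
model = SurrogateTransformer()
candidates = torch.rand(16, 30)           # 16 candidate solutions, 30 variables each
predicted_objectives = model(candidates)  # shape: (16, 3)
```

The transfer-learning point admits a similarly minimal sketch, again under stated assumptions: reuse the encoder of a surrogate pre-trained on related source tasks, freeze it as a general feature extractor, and fine-tune only the objective head on a small number of evaluated target-domain solutions (the data below is hypothetical).

```python
import torch.optim as optim

pretrained = SurrogateTransformer()   # assume weights were learned on source tasks
for param in pretrained.encoder.parameters():
    param.requires_grad = False       # keep the general feature extractor fixed

optimizer = optim.Adam(pretrained.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# A handful of evaluated target-domain solutions (hypothetical data).
x_target = torch.rand(8, 30)
y_target = torch.rand(8, 3)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(pretrained(x_target), y_target)
    loss.backward()
    optimizer.step()
```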