Analyzing brain signals and reconstructing visual stimuli from them can facilitate further exploration of the cognitive functions of the human brain, and has attracted strong interest in both neuroscience and artificial intelligence. However, due to challenges such as complex noise and limited alignment accuracy, efficient methods for extracting information from electroencephalogram (EEG) signals remain scarce, making EEG visual decoding difficult. We address these issues by proposing a new method for EEG representation learning and visual decoding that enables end-to-end image reconstruction from EEG signals. We leverage the semantic extraction and prediction capabilities of large language models (LLMs) to enhance EEG feature extraction. For semantic representation learning, we align EEG signals with target semantic embeddings, which are obtained from the hidden states of Large Language Model Meta AI 2 (LLaMa-2) by feeding descriptions of the images into the model. We also extract visual features from EEG signals to improve the low-level quality of the reconstructed images. We then fuse the semantic and visual features with a pre-trained diffusion model to generate the corresponding images. To our knowledge, we are the first to incorporate an LLM into EEG visual decoding. Our method achieves state-of-the-art EEG classification accuracy and reconstructed-image quality on the ImageNet-EEG dataset. In summary, our work is an important step toward exploring the relationship between language models and human visual cognition. Our code is available at https://github.com/lay-atsa/llm4eeg.
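As a rough illustration of the semantic alignment step described above, the sketch below shows one way to obtain sentence-level embeddings from LLaMA-2 hidden states for image descriptions and align EEG features to them with a contrastive (InfoNCE-style) objective. The checkpoint name, the toy EEG encoder, the mean-pooling of hidden states, and the loss are illustrative assumptions, not the authors' implementation; see the repository linked above for the actual method.

```python
# Minimal sketch (assumptions, not the paper's code): align EEG features with
# caption embeddings taken from LLaMA-2 hidden states via a symmetric InfoNCE loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

LLM_NAME = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; gated, requires access

class EEGEncoder(nn.Module):
    """Toy EEG encoder: (channels, time) -> embedding. Placeholder architecture."""
    def __init__(self, n_channels=128, n_samples=440, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, 1024),
            nn.GELU(),
            nn.Linear(1024, dim),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def caption_embeddings(captions, tokenizer, llm):
    """Mean-pool LLaMA-2 last hidden states over tokens: one vector per caption."""
    batch = tokenizer(captions, return_tensors="pt", padding=True)
    hidden = llm(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (B, H)

def alignment_loss(eeg_emb, text_emb, proj, temperature=0.07):
    """Symmetric InfoNCE between normalized EEG and projected text embeddings."""
    e = F.normalize(eeg_emb, dim=-1)
    t = F.normalize(proj(text_emb), dim=-1)          # map LLM hidden size -> EEG dim
    logits = e @ t.t() / temperature
    labels = torch.arange(len(e), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(LLM_NAME)
    tokenizer.pad_token = tokenizer.eos_token        # LLaMA has no pad token by default
    llm = AutoModel.from_pretrained(LLM_NAME).eval()

    eeg_encoder = EEGEncoder()
    proj = nn.Linear(llm.config.hidden_size, 512)

    eeg = torch.randn(4, 128, 440)                   # dummy EEG batch
    captions = ["a photo of a golden retriever sitting on grass"] * 4  # dummy descriptions
    text_emb = caption_embeddings(captions, tokenizer, llm)
    loss = alignment_loss(eeg_encoder(eeg), text_emb, proj)
    loss.backward()                                  # updates EEG encoder and projection only
```

In this sketch the LLM is frozen and only the EEG encoder and a linear projection are trained, which is one common design choice for aligning a noisy modality to fixed semantic targets; the paper's actual training setup may differ.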