One disadvantage of GPT (Generative Pre-trained Transformer) models, particularly the original GPT and GPT-2, is their tendency to generate biased or prejudiced text. This happens because the models are trained on large datasets of text scraped from the internet, which contain the biases and stereotypes present in that text. GPT-2 is also notable for generating highly coherent and fluent text, a capability that can be misused to produce fake news, impersonations, and other misinformation. Another disadvantage is that GPT models typically require large amounts of computational resources and memory to train and run, which makes them difficult to deploy on devices with limited resources.
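To make the resource requirements concrete, here is a rough back-of-the-envelope sketch of the memory needed just to hold GPT-2's weights in RAM. The parameter counts are the published GPT-2 model sizes; the byte costs assume plain dense storage and ignore activations, the KV cache, and optimizer state, so real usage is higher.

```python
def weight_memory_gb(num_params: int, bytes_per_param: int) -> float:
    """Gigabytes needed to store the raw parameters as a dense array."""
    return num_params * bytes_per_param / 1024**3

# Published GPT-2 parameter counts (approximate).
gpt2_sizes = {
    "gpt2-small": 124_000_000,
    "gpt2-medium": 355_000_000,
    "gpt2-large": 774_000_000,
    "gpt2-xl": 1_500_000_000,
}

for name, n in gpt2_sizes.items():
    fp32 = weight_memory_gb(n, 4)  # 32-bit floats
    fp16 = weight_memory_gb(n, 2)  # half precision
    print(f"{name}: {fp32:.2f} GB (fp32), {fp16:.2f} GB (fp16)")
```

Even before counting activations or gradients, the largest GPT-2 variant needs several gigabytes for its weights alone, which already rules out many phones and embedded devices.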