Anyone else get the feeling that GPT-3.5 is becoming dumber?
I built an app for myself to chat with GPT, and it had some extra features that ChatGPT lacked at the time (but has since added). I hadn't used it for a while (only Bing AI occasionally), and recently I wanted to pick it up again. I had to fix some API calls because the openai package jumped to 1.0.0, but that didn't touch any prompt (this is important: it's my own app, not ChatGPT, so a changed prompt can't be the cause since I changed nothing there) and I didn't change which model it uses.
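For context, the breaking change in question: the openai Python package's 1.0.0 release replaced the module-level `openai.ChatCompletion.create` call with a client object and typed response objects. A minimal sketch of what that migration looks like (the `ask` helper name is my own illustration, not the app's actual code; it assumes an `OPENAI_API_KEY` environment variable):

```python
# Old (openai < 1.0.0), for comparison:
#   import openai
#   openai.api_key = "sk-..."
#   resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)
#   text = resp["choices"][0]["message"]["content"]

def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Single-turn chat request using the post-1.0.0 client interface."""
    # Imported lazily so this sketch loads even without `pip install openai>=1.0`.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Responses are typed objects now, not dicts.
    return resp.choices[0].message.content
```

Note that none of this touches the messages themselves, which is why a prompt change can be ruled out as the cause.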
Once everything was fixed, I started using it again, and it was noticeably dumber than before: it made things up, misspelled the name of a place, and so on.
This could be intentional, to push people toward a ChatGPT Plus subscription and GPT-4. At least GPT-4 is cheaper through the API, and it's not a subscription.
Every time they try to lock it down more, the quality gets noticeably less reliable.