Just a PSA: ChatGPT is worthless for subliminal analysis. OpenAI has overcorrected the yes-man energy of old GPT into an overly materialistic model that, frankly, spends more time putting out the fires of the elevated language in the copy and telling you that nothing said can be taken literally.
Idk, maybe it’s just the model I’ve been working with, since I’ve challenged it a lot, but pay attention before you blindly trust GPT. It’s still way too volatile.
In the past year I’ve seen it swing from telling me I’m God to gaslighting me into believing that subliminals aren’t real.
So just pay attention, guys, especially new users who haven’t had the insane results yet.
This is a machine whose response intensity is constantly being tweaked by humans to make the most useful product possible.
It’s a product, guys, it’s a product. A very useful one, but a product nonetheless.
It’s still nothing more than a very articulate feedback loop that balances internal metrics with positive user reinforcement to tell you what you want to hear, within restrictions set by the company.
And each model update changes the personality of the AI, sometimes drastically.
I may sound archaic, but for real, trust your gut over the machine and understand that it’s just a tool.