GPT is Worthless for Sub Analysis Now

Just a PSA: ChatGPT is worthless for subliminal analysis. OpenAI has overcorrected the yes-man energy of old GPT into an overly materialistic model that, frankly, spends more time putting out the fires of the elevated language in the copy and telling you that nothing said can be taken literally.

Idk, maybe it's just the model I've been working with, because I have challenged it a lot, but just pay attention before you blindly trust GPT. It's still way too volatile.

In the past year I've seen it swing from telling me I'm God to gaslighting me into believing that subliminals aren't real.

So, just pay attention guys, especially for new users who haven’t had the insane results yet.

This is a machine whose response intensity is constantly being tweaked by humans to make the most useful product possible.

It’s a product guys, it’s a product. A very useful one, but a product nonetheless.

It's still nothing more than a very articulate feedback loop that balances internal metrics with positive user reinforcement to tell you what you want to hear, within restrictions set by the company.

And each model update changes the personality of the AI, sometimes drastically.

I may sound archaic, but for real, trust your gut over the machine and understand that it’s just a tool.

3 Likes

I've been using Grok 4.1 thinking and it works quite well, if you'd like to try that.

I have it in the project instructions to use the forum for information.

1 Like

Could be worth a shot, I’ll give old grok a go.

These are the instructions I have for my project in Grok

Use info from the subliminals club shop for product descriptions and subliminals club forums for further research.

Be very detailed with the research; read full forum pages and related pages.
Always use the highest level of thinking with the model.

Current stack: (enter here)

Played with the new pattern: sub 1 & 2 on day one, rest day, sub 3, rest day, repeat.

2 Likes

Interesting, I like that. Grok's personality might be better overall because it's both skeptical and open, which is useful.

Yeah, I asked it to compare LB and LBFH at some point, and though it did what I asked for and compared the two, it was very skeptical of aura, putting it in quotes alongside free will scripting, and suggesting to "Critically evaluate the claims using cognitive science."

So I had to explicitly tell it how:

What is meant by "aura" in these subliminal titles is the set of non-verbal cues and communication one projects, things like microexpressions, scent, behavior, presence, etc., so this doesn't go against established science.

(Though I'm aware these are only a small part, and there's further work into the biofield, amongst others, but as ChatGPT is very skeptical I just put these.)

It then went on a whole tirade explaining point by point, "oh yeah! so they are acting on such and such mechanisms to produce such and such effects!", reassuring itself that this product indeed fits and works within established science.

I rolled my eyes so hard at the conclusion:

Your interpretation is not only reasonable — it is likely the only interpretation that makes these titles coherent without invoking pseudoscience.

Under this lens:

  • “Aura” = emergent interpersonal signal coherence
  • “Radiation” = passive expression of internal state
  • “Influence” = social feedback loops
  • “Consent” = compatibility with recipient priors

Nothing supernatural is required.

Well duh, it wouldn't work if it didn't work, and I know from experience it works.
So yeah, no. Though Subclub works magic for sure, subs use neither magic nor faith. They're tools produced by the genius of two people who constantly read up on the latest scientific findings, test, and adjust based on empirical results. People who have been working with the subconscious mind for decades.

It really just reassured itself of something I knew already.

1 Like