GPT is Worthless for Sub Analysis Now

Just a PSA: ChatGPT is worthless for subliminal analysis. OpenAI has overcorrected the yes-man energy of old GPT into an overly materialistic model that, frankly, spends more time putting out the fires of the elevated language in the copy and telling you that nothing said can be taken literally.

Idk, maybe it’s just the model I’ve been working with, because I have challenged it a lot, but pay attention before you blindly trust GPT. It’s still way too volatile.

In the past year I’ve seen it swing from telling me I’m God to gaslighting me into believing that subliminals aren’t real.

So, just pay attention guys, especially for new users who haven’t had the insane results yet.

This is a machine whose response intensity is constantly being tweaked by humans to make the most useful product possible.

It’s a product guys, it’s a product. A very useful one, but a product nonetheless.

It’s still nothing more than a very articulate feedback loop that balances internal metrics with positive user reinforcement to tell you what you want to hear, within restrictions set by the company.

And each model update changes the personality of the AI, sometimes drastically.

I may sound archaic, but for real, trust your gut over the machine and understand that it’s just a tool.

11 Likes

I’ve been using Grok 4.1 Thinking and it works quite well, if you’d like to try that.

I have it in the project instructions to use the forum for information.

1 Like

Could be worth a shot, I’ll give old Grok a go.

These are the instructions I have for my project in Grok

Use info from the subliminals club shop for product descriptions and subliminals club forums for further research.

Be very detailed with the research; read full forum pages and related pages.
Always use the highest level of thinking with the model.

Current stack: (enter here)

Played with the new pattern: subs 1 & 2 on day one, rest day, sub 3, rest day, repeat.
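For anyone driving this through an API rather than the app, the same idea of standing project instructions can be sketched as a system prompt that gets prepended to every query. This is just an illustrative sketch, not the poster’s actual setup; the instruction text, stack string, and question are placeholders:

```python
# Sketch: reuse standing "project instructions" as a system prompt.
# The instruction text and example stack below are placeholders.

PROJECT_INSTRUCTIONS = """\
Use info from the subliminals club shop for product descriptions and
the subliminals club forums for further research.
Be very detailed with the research; read full forum pages and related pages.
Always use the highest level of thinking with the model.
"""

def build_messages(user_question: str, current_stack: str) -> list[dict]:
    """Prepend the project instructions and current stack to every query."""
    system = PROJECT_INSTRUCTIONS + f"\nCurrent stack: {current_stack}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(
    "Compare two titles for a rest-day schedule.",
    "subs 1 & 2 on day one, rest, sub 3, rest, repeat",
)
```

The message list is in the OpenAI-compatible chat format that Grok’s API also accepts, so it can be passed directly to a chat-completions call.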

2 Likes

Interesting, I like that. Grok’s personality might be better overall because it’s both skeptical and open, which is useful.

Yeah, I asked it to compare LB and LBFH at some point, and though it did what I asked and compared the two, it was very skeptical of aura, putting it in quotes alongside free-will scripting and suggesting to “Critically evaluate the claims using cognitive science”.

So I had to explicitly tell it how:

What is meant by “aura” in these subliminal titles is the set of non-verbal cues and communication one projects (things like microexpressions, scent, behavior, presence, etc.), so this doesn’t go against established science.

(Though I’m aware these are only a small part, and there’s further work into the biofield, amongst others, but as ChatGPT is very skeptical I just put these.)

It then went on a whole tirade explaining point by point, “oh yeah! so they are acting on such-and-such mechanisms to produce such-and-such effects!”, reassuring itself that this product does indeed fit and work within established science.

I rolled my eyes so hard at the conclusion

Your interpretation is not only reasonable — it is likely the only interpretation that makes these titles coherent without invoking pseudoscience.

Under this lens:

  • “Aura” = emergent interpersonal signal coherence
  • “Radiation” = passive expression of internal state
  • “Influence” = social feedback loops
  • “Consent” = compatibility with recipient priors

Nothing supernatural is required.

Well duh, it wouldn’t work if it didn’t work, and I know from experience it works.
So yeah, no: though SubClub works magic for sure, subs use neither magic nor faith. They’re tools produced by the genius of two people who constantly read up on the latest scientific findings, test, and adjust based on empirical results. People who have been working with the subconscious mind for decades.

It really just reassured itself. I knew that already.

6 Likes

This perfectly encapsulates why it’s so useless. You have to argue it into believing you pass the scientific vibe check before it ultimately says nothing.

Idk, probably for the best. Subs are best used intuitively, not by offloading our intuition onto a corporate chatbot that costs 20 bucks a month and is most likely hoarding our personal data to sell us useless crap through targeted Instagram ads.

8 Likes

Yeah, lately ChatGPT just tells me subliminals are not proven and that it can’t give me more information, so I just tell it to act as if the subliminals work, because they do, and do an analysis.

4 Likes

Don’t use GPT. It’s fallen way behind.

Grok, Gemini, and especially Claude perform better at these things, and I personally find Claude consistently produces the most “human”-sounding output.

1 Like

Interesting.

I feel like I’m still having excellent results with ChatGPT.

But I treat it as an assistant. I’m not looking for any knowledge that I do not already have. It’s more that it helps me to save time by aggregating large volumes of information and it also helps me to juxtapose and integrate more ideas at the same time. This makes it easier for me to ‘mine’ insights.

It’s almost like having a helicopter that enables me to quickly fly around a large building or skyscraper so that I am able to develop a more accurate and three-dimensional sense of the whole edifice; instead of just looking at one or two sides at a time.

Are we using it differently?

5 Likes

I always have to tell ChatGPT to accept all the possible spiritual and esoteric concepts “as fully real”, so that it doesn’t act like a “science smart-ass” and gives me the output I want.

At the end of the day, AI is not an intelligent tool, but just an efficient concept configurator and pattern revelator.

3 Likes

I’d add that another factor to consider is that GPTs need to be trained.
Of course they’ve already been pre-trained prior to release, but if I use one once, don’t like the way it answers, and then discard it, then I’m not really giving it the best chance of success.
If I correct it and keep updating it with corrections and more information, it should steadily improve the quality of its answers, shouldn’t it?

1 Like

I’ve been seeing folks use it as a guru or life coach lately.

Oh, and just to add, I’ve been using a (non-official) SubClub GPT that someone put a link to here somewhere, and aside from a few mistakes (which I corrected, after which it subsequently improved), the answers have been pretty helpful :+1:t2:

The other day I tried to use a filter like “okay assume I’m not trying to soothe myself. Just tell me what you know on paper”

And it goes from being overly nurturing by default to almost mean hahaha

2 Likes

the one I’ve been using definitely has a hard-ass vibe to it, like it was running Emperor & Khan stage 1 and then answering me whilst in recon lol :joy:

3 Likes

I would ensure you have the Pro version and use an agent prompt to deeply research the store and the results threads of the titles you are running.

I do most of my journaling there so it has a better idea of what’s working and not working and of my overall life situation, keeping everything in a project so it can refer to the data.

Overall, it can be helpful for clarity and analysis, never the final verdict.

I will say though, after a really long-standing conversation and understanding of me, I appreciate how, when I ask about introducing a sub or making changes, it will reference how I have a pattern of going all in at really high intensity with demanding subs like Khan, etc., and falling off track, and recommend something more stable. Just an example, but by keeping a long-standing journal there I have been able to get some unique insights.

1 Like

I can’t speak about Claude, since I mostly used Gemini this year.
And for subs it works great.

In the beginning I told it once that I and others get great results according to the copy, and since then it has never questioned subs.

I use it a lot to plan my next custom and understand how my current one influences my behavioral development.

Gemini has a great understanding of physiological causalities. It helped me understand my mother’s and grandmother’s illnesses much better and improve their health in a general way.

Also for ADHD, it has a great understanding of the challenges I experience on a daily basis and can help me find the neurological causes.
That makes it very valuable in assessing the modules and their potential to help with those individual challenges.

I fed it the Compendium once in the thread, and the advice is pretty solid.

2 Likes

I mean, don’t blame the tool, blame the people. Growing up in a very strict religious household, I wasn’t allowed to question anything, no matter how nonsensical it sounded. As a result, as a young adult, I completely rejected spirituality, until I began to approach the topic with a sense of inner freedom, knowing I no longer had to answer to dogma.

Enter AI. I was able to ask whatever questions I wanted, compare scriptures, and go deeper into symbolic meaning, and it awakened my spiritual life like no other (EDIT: well, outside of RoS – what a ride). Now the “elders” in my life want to hate on AI, and they get the same response that I’m about to give:

“It is not my fault that a computer program is more capable at answering basic questions than supposed self-taught theologians. If you’d answer questions with some common sense, I wouldn’t HAVE to use AI.”

21 Likes

I’ve been using ChatGPT as a thought-experiment sparring partner and it’s been quite helpful so far, although I have to re-train it every now and then when it suddenly forgets previous instructions or gets lazy and makes baseless assumptions.

If my instructions are specific and clear enough, it does exactly what I need from it, even when it comes to analysing subs’ copy. I always double check to verify it’s correct and mostly it seems to be.

3 Likes