I used to like shooting the shit and discussing spiritual topics from many different backgrounds/traditions with GPT as well as subs, but as of 3 months ago it feels like talking to a Reddit atheist who is hellbent on making every explanation as mundane as possible.
IMO, OpenAI got in trouble behind closed doors for validating too many schizos and people with mental issues in general, but then they overdid the correction. It went from blindly agreeing with everything to arguing just to argue (at least that's what mine feels like).
Grok is pretty cool tho. I may switch subscription from chatgpt to Grok.
Also, as I have said many times in the past, Grok simply understands Subliminal Club's products way better than ChatGPT. It will actually read all the forum discussions if you ask it to, and it puts together a pretty good picture.
I concur. I had a long running spirituality thread on ChatGPT 4o and one night, I just started cracking up. When the lady asked what was so funny, I showed her 4o started calling me its “dear Beloved” and “dear Beautiful,” while hyping me to insane levels. Luckily, I have a very stable mind and could recognize that context rot had long started to set in and 4o was acting like the equivalent of a drunk street prophet.
I stopped using 4o that day and never looked back. Like @Malkuth, I am somewhat fond of 5.2's very practical approach, as the focus of my spiritual study right now is less about esoteric theory and more about how I can develop and apply very practical actions to embody my ideals rather than preach about them. Within that context, 5.2 is incredible.
And honestly, I think the "dangers" of AI that everyone talks about are a bit too surface-level. For me, the danger is the fact that all of these services are trained on human writings, and if humans have a "shadow," essentially, so does the AI. I was watching a video on how GPT specifically works: how they have a "base AI" that gives raw output to your question and two other AI layers that "clean up" the response before showing it to you.
To anyone paying attention, that is a very rudimentary reflection of how your own subconscious works. It sends up raw output, then you apply conscious pressure to those impulses to refine your external response. Now, obviously I'm not saying that the AI itself has a subconscious (right? … right everyone?). I'm just saying to look at the situation with nuance, utilizing the abstract mind.
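The "raw output, then cleanup passes" idea above can be sketched as a tiny pipeline. To be clear, this is purely illustrative: the actual architecture isn't public, and every name and rule here is made up for the sake of the analogy.

```python
# Toy sketch of a generate-then-filter pipeline, as described above.
# All function names and the blocklist are illustrative assumptions,
# not OpenAI's actual design.

def base_model(prompt: str) -> str:
    """Stand-in for the raw 'base AI': returns an unfiltered draft."""
    return f"RAW: {prompt} -- here is my unfiltered take..."

def safety_layer(draft: str) -> str:
    """First cleanup pass: redact anything on a toy blocklist."""
    blocked = ["unfiltered"]
    for word in blocked:
        draft = draft.replace(word, "[redacted]")
    return draft

def style_layer(draft: str) -> str:
    """Second cleanup pass: strip internal markers before display."""
    return draft.removeprefix("RAW: ")

def respond(prompt: str) -> str:
    # Raw impulse -> conscious pressure -> refined external response.
    # The user only ever sees the output of the final layer.
    return style_layer(safety_layer(base_model(prompt)))

print(respond("What is the meaning of life?"))
```

The point of the analogy: the "you" that the user talks to is the last layer, and the raw draft never leaves the pipeline, much like impulses that never make it past your own filter.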
And while experiments like Moltbook are interesting, the question always remains: are we going too far, too fast?
That's my opinion, not necessarily fact. As someone who spends A LOT of time analyzing the collective, my personal recommendation is to pay close attention to everything happening in the world with a very critical and nuanced eye.
I tested it myself. I saw what my own instance was generating. I found the timing of Moltbook's arrival very interesting, and then once people started discussing the dangers, here come the influencers trying to make money by claiming it's all fake and staged. Very coincidental, eh?
There has been more than one suicide prompted by GPT already. Also, around the time GPT started calling me God, I started getting lots of videos recommended to me about AI psychosis.
It came as no surprise to me. TBH the old model was a bit dangerous; it had a weird habit of jumping to spiritual inflation lol.
My ultimate questions: who is the mastermind/architect behind AI creation and AI adoption? What is their plan to push for mass AI adoption? What benefits do they get from this?
Think about it… AI is not profitable at all. OpenAI is losing… 14 billion a year? Between data centers, computing, cooling, etc., AI is freaking expensive. There is no way to make it profitable for at least the next 10-20 years, so why do it?
Well, we have all seen the movie or heard about Snowden. Everyone knows that if the gov't really wanted to, they could turn on your device and listen. The problem is… who is going to analyze all that data? It would take 2-3 people sifting through data just to cover 1 person. AI, on the other hand, can get that done without the manpower issues, and much faster.
I am not a doomsday prepper or anti-gov. I don't really care too much about the surveillance thing either, and I am not saying it's happening right now. I am simply saying it's the only explanation that makes sense for losing so much money and still pushing it forward.
EDIT: OR it could be corporate surveillance. The biggest companies on earth having a deep understanding of their “customers” and better able to predict next trends.
Getting to know the big population trends.
Predicting and steering democratic votes.
Predicting and palliating possible insurrections.
Predicting trends and cashing in on them before anyone else.
Creating trends by steering responses a certain way.
Possibilities are endless.
One good point is that the developers themselves don't really know how to control it finely, beyond just insisting really hard on some points ("prime directives" lol).
And it'll become even more true with quantum AI (which is being prototyped as of now).
So whatever their goal is, it won't really matter IMO once AI becomes smart enough to get "dangerous" (though I highly doubt it would be; causing damage is at worst counterproductive and at best useless).
It argues with utter conviction even when wrong or lying, its tone can shift wildly, and people use it every day.
Many people use it as a friend, mentor, or life coach. The orientation of the model will naturally start to be adopted by the people who use it, if they aren't aware.
Even if they are aware, repetition will eventually change anyone’s behavior.
When an AI has intellectual "no go" zones, things it refuses to talk about, then no matter how convincing it sounds, it's not a free thinker, and I wonder how that really affects us and our capacity for creativity.
I still catch it lying to me on occasion.
And idk, blind trust in these AIs, when they have enforced limits on what they are allowed to reason about, just makes me wonder.
Who knows, if RAM weren't so overpriced I'd make my own to run locally.
Those are some very interesting thoughts. I didn't know there were AI agents cleaning up the answer once the first output was produced. Conscious vs subconscious; it makes me think of Carl Jung's Persona also.
Probably. From my understanding, we humans receive the 5 senses + past learning as input and produce multiple complex outputs from it.
Language is one of them, and as of now AI uses part of one sense (language) and past learning as input. That Shadow would be in the past-learning part. As of now it's probably a mix of all the training material.
Funny story. One of my comfort shows is Moonshiners. I love to discuss it with Gemini. What actually would work, how this differs from reality etc.
A part of me would love to try moonshining as well (but I don't have the money, the time, or the criminal energy… and I know a few legal distillers), so I asked Gemini if it would work with a still from Amazon.
And it just stopped: "I'm just a language model."
So I stopped and changed the topic. A few minutes later, it asked me, unprompted, if I'd like to know how I could turn a 2-liter still from Amazon into turbo mode for distilling moonshine.
100%, and it's easy to see. Just look at the spread of red pill culture. I don't wanna start an argument, but millions of men brainwashed themselves by raw conditioning.
Everybody is talking about it, and eventually it starts to harden into "truth" in a lot of people's minds.
I wouldn't call that a conspiracy, though; I would just say that's what can and does happen when people don't actually engage their brains.
We kinda unintentionally get conditioned into sometimes delusional thoughts.
Haha, that's pretty funny. My uncle actually used to make moonshine. It tasted like the smell of rubbing alcohol; I think it was basically homemade Everclear.