Main Disc. Thread - QTKS: Custom Subliminals In Your Own Voice

I’m reluctant to say that we’re saving the best for last. This may be the last scheduled announcement, but the STKS era is NOT over – it’s only just begun. We’re going to be engaging in surprise drops (as well as regularly scheduled releases and such) all summer long, so keep your eyes on the official STKS thread here.

As for today’s announcement: We’re still working out all of the details, but we are currently testing a new version of the Q Customs, which we’ve dubbed QTKS: “Quintessence,” the King Speaks (working title, we may change it).

Why’d we name it this? Well, the name is quite literal. With QTKS, the embedded voices used for the custom will be YOURS. You’re the king. It’s your voice that will help you achieve your dreams the most.

We’ve teamed up with a number of voice cloning providers and developed a method using the Dolby API (which we already use when processing Solace-masked customs) to create a subliminal build that is absolutely unrivaled. This is NOT just the standard customs with your voice swapped in for the high-quality commercial voices we licensed. We had to create an entirely new build process for this, one that allows us to produce even more effective titles.

The process to build the standard Q customs is largely automated. Since we special-ordered the voices we use, we were able to apply a standardized mixing and mastering process to them. Things like EQ, compression, etc. With the way subliminal audio works, it was hard enough figuring out how to create a title with the Solace mask. We could’ve always used the legacy mask, as it’s so loud and overbearing that we could work with that one easily, but we really needed it to work with Solace, as we don’t go backward.
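For readers curious what a "standardized mixing and mastering process" looks like in the abstract, here's a minimal toy sketch of two common steps, peak normalization and dynamic-range compression. This is purely illustrative; the actual Q/QTKS build chain (Dolby API, Solace masking) is proprietary, and every name and threshold below is an assumption, not their method.

```python
# Toy illustration only: peak normalization + a naive per-sample compressor.
# The real QTKS/Dolby pipeline is proprietary; values here are arbitrary.
def normalize_peak(samples, target=0.9):
    """Scale samples so the loudest peak hits `target` (full scale = 1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    gain = target / peak
    return [s * gain for s in samples]

def compress(samples, threshold=0.5, ratio=4.0):
    """Above `threshold`, reduce the overshoot by `ratio` (hard-knee)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

voice = [0.0, 0.2, -0.8, 1.0, -0.3]      # toy mono sample buffer
mastered = compress(normalize_peak(voice))
```

With a fixed set of voices, the same chain can be applied unattended to every build, which is presumably what makes the standard process easy to automate, and a per-customer cloned voice harder.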

It’s taken over a YEAR to figure out how to get it right, but not only have we managed to pull it off with Solace, we can even do it with a lightning mask, which has far more moments of quietness and near silence than Solace. This means that some of this technology can be backported to regular Q, where you will be able to choose different masks. More on that another day. Back to QTKS:

I don’t want to “overhype” the product, as we’re still testing it (including @Fire , since this has been his brainchild since the beginning) but early results with the offline testing group have been absolutely incredible. Testing will occur here on the forum, but the testers (at least the first batch) have been pre-selected and are already aware of their status. Only one of those testers has actually received theirs as of yet (again, these take MUCH longer to process and I’m currently building all of them myself), with more going out tomorrow.

The earliest impressions are that the technology feels completely different, so different that they asked me if it was still considered “Zero Point.” We’ve received multiple reports from our offline testers of minimal recon, and that the recon that does occur passes very quickly, as they seem to intuitively understand what’s causing it. This act of “understanding” seems to resolve the recon almost immediately, especially if the user takes action. The sole on-forum tester at the moment has not reported recon of any kind, just a very organic unfolding of results.

There have been other GOOD results, particularly a rather mind-blowing experience that Q-Programmer, @CatalystPrime reported. We will not be sharing that one yet, as we want to see that same kind of effect with the other testers. When / if it happens, we will share it.

As for pricing, we’re just going to be honest: we are still working out the details, but this is considered a very premium service. It does not and will not replace the standard Q custom service. In the “Early Access” period, we MAY offer a lower price, which will increase later. Part of the reason is that these have to be monitored and tweaked by a live person while building. Even if we automate much of the process, there’s still a portion that will be hand built for the foreseeable future. This is not a service for a “first custom” either. This is for those who are “locked in” – you’ve tried a number of products, gotten good results, and want to enhance those results.

We’re going to leave it here while we prepare for testing. I know there’s A LOT of questions – go ahead and ask them, but keep in mind we’re still working through everything. We know there will be privacy concerns about the cloned voice (right now, as soon as the build process completes, Q automatically deletes EVERYTHING) as well as questions about how the voice sounds (we’re still getting feedback, but the on-forum tester reported that it was “trippy” how accurate it sounded). By the time we launch, we’ll have answers for everything.

Questions, comments, concerns? Post 'em.


What in the absolute fuck?

This is insane, talk about the cutting edge.

This will truly bring results to the next level, the scripting in our own voice is ridiculous. Are we going to submit voice samples or something? How would it even work?

This definitely sounds super pricey but very much worth it; unfortunately this is going to work against many of us habitual title changers :sweat_smile:


So I was right with this at least :grin:

This is something so entirely new, I doubt any sub company out there would have even thought of this. :astonished: Wonder how it would work though.


What the fuck that is so much more impressive than I could have imagined.

I was over here thinking it would be mind blowing to have name embedded store titles


Truly, what in the fuck you’re mad geniuses I could have never predicted this


Yeah I’m guessing there will be a script we have to read out and upload as an MP3 to train the AI on our voice to make the custom. Very very cool


Where do I sign up? After using 30+ customs this would be an interesting experience to try when it comes out🫡


Yes, you will read a script for a number of minutes. You get to choose the script, it just needs to be something that can generate a bit of emotion for you. As a standard template, we’ve been giving people the ending of The Great Gatsby:

“And as I sat there, brooding on the old unknown world, I thought of Gatsby’s wonder when he first picked out the green light at the end of Daisy’s dock. He had come a long way to this blue lawn and his dream must have seemed so close that he could hardly fail to grasp it. He did not know that it was already behind him, somewhere back in that vast obscurity beyond the city, where the dark fields of the republic rolled on under the night. Gatsby believed in the green light, the future that year by year recedes before us. It eluded us then, but that’s no matter—tomorrow we will run faster, stretch out our arms farther… And one fine morning… So we beat on, boats against the current, borne back ceaselessly into the past.”
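Since the upload requirements haven't been published yet, here's a hedged sketch of the kind of pre-upload sanity check a user (or the intake pipeline) might run on a recording: long enough, and not silent. Every threshold here (the two-minute minimum, the RMS floor) is an invented placeholder, not an official QTKS requirement, and the demo feeds it a synthetic tone rather than a real voice sample.

```python
# Hypothetical pre-upload check for a voice sample; all thresholds are
# illustrative assumptions, not published QTKS requirements.
import math

def validate_sample(samples, sample_rate=44100, min_seconds=120, min_rms=0.01):
    """Return (ok, reason) for a mono buffer of floats in [-1.0, 1.0]."""
    seconds = len(samples) / sample_rate
    if seconds < min_seconds:
        return False, f"too short: {seconds:.1f}s (need {min_seconds}s)"
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms < min_rms:
        return False, f"too quiet: RMS {rms:.4f}"
    return True, "ok"

# A synthetic 3-minute 440 Hz tone stands in for a real recording.
rate = 8000  # low sample rate keeps the demo fast
tone = [0.5 * math.sin(2 * math.pi * 440 * n / rate) for n in range(rate * 180)]
ok, reason = validate_sample(tone, sample_rate=rate)
```

The "generate a bit of emotion" guidance makes sense for cloning: a flat read gives the model less expressive range to learn from than one with some dynamics in it.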


is this build a much better upgraded terminus squared build?
will the terminus squared build be removed for this new voice build?
is there any development in the number of cores and modules?


We cannot compare these yet. Not enough data points. From the early responses, it has a much different feel than Terminus Squared. It doesn’t have that “my head is full of information it must process” feel, while generating results much faster.

Terminus Squared will probably not be offered at the start. Just the QTKS build.

To be determined. We are currently testing a maximum of 20 modules. Definitely will have to stick to a hard limit regarding cores.


I request that we be given samples to download to listen to for a few days. That way we can compare for comfort; ear fatigue; sound versus quiet ratio; etc.


By the way, I didn’t realize how fatiguing that waterfall mask was until I compared it to Solace.

Also, those chirping birds are awesome. Naturescapes are cool.


Can the text be in French, @SaintSovereign?


Didn’t see that coming.

Gg and gj, Subclub!


Normally when you hear your voice played back you hate it (editing podcasts is the worst). I believe it’s called “voice confrontation,” something that causes us not to enjoy listening to our own voice on a recording, at least consciously.

It’s interesting that this produces such positive results. I wonder how the subconscious receives your voice, and why it responds the way it does when your conscious mind is repelled by it. (At least in my experience.) I cannot listen to my own podcasts.

It might be a trade secret which I respect, but why do you think the results are so good when voice confrontation is something many people have?

At first I was like, this makes complete sense, but then I immediately thought of the voice confrontation phenomenon right after. Kind of a weird paradox.

Also love the fact of a thunder mask! Been wanting that for a long time next to a fire mask.


That’s because we hear ourselves via bone conduction through the skull, which carries deeper bass tones that don’t propagate through open air as well.

My voice sounds goofy on recordings :joy:


is this the ZPv3 u’ve been talking about?

ZPv3 is still in the future. This be summat else, mi’lad.


As I understand it, there IS a definite difference between the voice you hear on a recording, compared to the voice you hear in real life.

We hear additional vibration added to our voice as our throat, sinuses, and even chest & diaphragm lightly vibrate during our speaking - a physical sensation that changes the auditory perception, in a way that’s generally enjoyable.
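The "missing bass" idea above can be toyed with in code. This is a minimal sketch only: a one-pole low-pass mixed back into the signal to crudely emphasize low frequencies, roughly the direction in which bone conduction colors your own voice. The coefficients (`alpha`, `mix`) are arbitrary and nothing here models real skull acoustics.

```python
# Toy "bass emphasis" filter, loosely illustrating why a recording sounds
# thinner than the bone-conducted voice you hear in your own head.
# Coefficients are arbitrary; this is not a model of real acoustics.
def bass_boost(samples, alpha=0.1, mix=0.3):
    """Add a low-passed copy of the signal back in, emphasizing lows."""
    out, lp = [], 0.0
    for s in samples:
        lp += alpha * (s - lp)       # one-pole low-pass state
        out.append(s + mix * lp)     # dry signal plus bass emphasis
    return out
```

Run on a constant (very low frequency) signal, the output grows toward 1.3x; run on a rapidly alternating (high frequency) signal, it stays near 1x, which is the low-shelf behavior in miniature.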

My questions are

  1. does our subconscious brain fully and completely recognize our voice as our voice? Does it tease out the difference between pure sound and vibration-influenced sound waves? Or does it also expect our voice to have that vibration and additional tones not found in the recordings?

  2. as others have mentioned, does our subconscious LIKE the sound of our voice?

  3. just because CONSCIOUSLY we can’t detect a difference between our voice and the AI voice… can our subconscious detect a difference? Does it matter?


called it :wink:

very excited!