The Road to Alexandria (IG: UPX)

This is one of the stranger uses of regex I’ve seen. Can you imagine trying to apply it to a really large prime? You would need a string of billions of ones. Not the most efficient approach, but still a cool trick.
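For anyone who hasn’t seen it, here’s a minimal sketch of the trick, assuming the fancy_regex crate (Rust’s standard regex crate deliberately leaves out the backreferences this needs):

```rust
// Unary-regex primality test: write n as a string of n ones, then ask
// whether that string can be split into two or more equal repeats of a
// chunk at least two characters long. If it can, n is composite.
use fancy_regex::Regex;

fn is_prime_unary(n: u64) -> bool {
    let ones = "1".repeat(n as usize);
    let re = Regex::new(r"^1?$|^(11+?)\1+$").unwrap();
    // The pattern matches lengths 0, 1 and all composites, so primes are the non-matches.
    !re.is_match(&ones).unwrap()
}

fn main() {
    let primes: Vec<u64> = (2..30).filter(|&n| is_prime_unary(n)).collect();
    println!("{:?}", primes); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
}
```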

Prime bit sequences are really interesting. It seems that for every bit width n with 4 < n < 10, the number of primes that are exactly n bits wide (so the uppermost and lowermost bits are both 1) is itself prime, and the counts keep increasing. For example, there are 5 primes representable in exactly 5 bits, 7 primes in 6 bits, 13 in 7 bits, 23 in 8 bits, and 43 in 9 bits. The pattern breaks at 10 bits, where there are 75 such primes, and 75 is not prime. Also, there are many cases where the bit sequence between the two 1s can be reversed to yield another prime.
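A quick way to check those counts with plain trial division (a throwaway sketch):

```rust
// Count the primes that need exactly n bits, i.e. primes p with
// 2^(n-1) <= p < 2^n, for n from 5 to 10.
fn is_prime(n: u64) -> bool {
    if n < 2 {
        return false;
    }
    let mut d = 2;
    while d * d <= n {
        if n % d == 0 {
            return false;
        }
        d += 1;
    }
    true
}

fn main() {
    for bits in 5..=10u32 {
        let lo = 1u64 << (bits - 1);
        let hi = 1u64 << bits;
        let count = (lo..hi).filter(|&p| is_prime(p)).count();
        println!("{} bits: {} primes", bits, count);
    }
    // Prints 5, 7, 13, 23, 43 for 5 to 9 bits, then 75 at 10 bits.
}
```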

Also, this is a very cool website I hadn’t found until today.

https://oeis.org/

There is definitely a balance point between the programming knowledge/logic and the other mental components. Every now and then I will be reminded of some aspect of what I’ve read in the past that hasn’t yet been fully integrated with the other parts of my learning.

Woke up this morning after perhaps half a bottle of G&T just wanting to sleep for the first few hours, my liver telling me what a silly thing I’d done. Later, though, I heard Ripperger asked what he does to get away from his job as an exorcist, to which he replied “good Scotch and cigars”, explaining that he doesn’t let the demons tell him how to live his life and that he seeks balance. I had a good chuckle at that, because I’m the same in many ways: I recognise the need for temperance rather than purity just for its own sake.

However, feeling the effect of the previous night on my gut, I mused that there had to be some way to iron out those after-effects quicker than they would disappear without intervention. I set that aside for the moment and was watching a video of Tucker Carlson talking, as he does, about the current state of the world before moving on to UFOs. Eventually he got to the heat signature recorded off one of these craft by a Raptor pilot, which showed the heat towards the bottom of the craft rather than the top. That got me thinking about Schauberger’s old paradoxical idea of “falling heat and rising cold” and sent me on a little research sideline to remind myself of his work. This ended up being the right course of action.

This is one essay on the topic which is pretty good, but it’s still a very difficult subject to wrap one’s head around. In reality there is an interplay between density, energy levels, frequency, and the gradients of these. Somehow living bodies, including the Earth, end up concentrating heat at their core (think of the human body, for example), whereas we are used to thinking of heat as rising, because examples like the cigarette lighter against the palm, which Tucker gave, seem to show heat rising. In reality, that is only one manifestation of heat.

I’m still learning, but reading Schauberger again gave me the idea for an experiment: wrapping myself in furry blankets while running my lava lamp in the background, giving off low-frequency light in the orange/red range, and then focusing my own mind towards the center of my body, particularly the root, trying to create a gradient that would direct that absorbed and reflected heat into the core of the body to build up form.

I ended up losing consciousness for some time and woke up to find all of the earlier malaise gone. I need to make sure this effect is reproducible, keep experimenting, and continue reading and rereading the source material. There are certain things in Schauberger’s writings, and in the writings of those who write about him, that just leap out at me: things to do with creating higher-frequency energies through the right type of motion, which I need to fully understand and integrate. If I can use these techniques to heal my own bodily ailments and improve the quality of my mind, then I can improve my ability to think clearly and thus to code my VR applications. Half the reason I have difficulty focusing on the code is bodily pain or disharmony, so it follows that I should apply my skills to that area first and tune or tweak that aspect of my life just like I tune or tweak my editor or my code.

These are the kinds of thoughts going through my mind as I finish the evening. I need to make sure I don’t skim over this aspect of my interests the way I can with others, and keep figuring out ways to use it to improve the whole process of my business.

Back to work tomorrow. New pack of smokes, new pay hopefully. Will make sure not to waste it no matter what happens on the news.

1 Like

Finally figuring out a proper Emacs configuration for my setup. New changes:

  • Installed AutoHotkey v2 and mapped my right Control key to send C-x @ s, which adds the super modifier to the next key pressed
  • Uninstalled MSYS2 from my D drive, which has no hard-link capability, reinstalled it to the C drive, installed gcc via pacman, and added MSYS2 to my PATH
  • Installed org-roam in Emacs
  • Ran M-x org-roam-db-autosync-mode to start building the org-roam SQLite database
  • Now that MSYS2 is in an appropriate folder, optionally reinstall pdf-tools to allow viewing PDFs from within Emacs

Now continue watching Getting Started with Org Roam - Build a Second Brain in Emacs - YouTube

Mental note: Never ever again set JAVA_HOME as a global environment variable. Ugh.

Just spent the last hour trying to figure out why my app wouldn’t sign. It was because I had set JAVA_HOME when installing the VLC skin editor so it could use a newer version of Java. It turns out the Android SDK doesn’t like the newer Java version, so apksigner suddenly could not read the keystore I had previously created.

And still, my app does not even reach my asset-reading code; it bombs out trying to load the Vulkan extension function set_debug_utils_object_name_ext. I’m hoping, praying, it’s because I accidentally compiled my damned program in debug mode. Will have to wait till tomorrow to figure it out. Too exhausted by this point.
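If it does turn out to be a debug/release mix-up, one thing I could do is log the build profile at startup so it’s obvious at a glance. A trivial, hypothetical helper using Rust’s built-in debug_assertions flag:

```rust
// Hypothetical startup check: debug_assertions is on in debug builds
// and off in release builds, so this tells me which profile I actually shipped.
fn log_build_profile() {
    if cfg!(debug_assertions) {
        println!("running a DEBUG build");
    } else {
        println!("running a RELEASE build");
    }
}

fn main() {
    log_build_profile();
}
```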

1 Like

Slowly getting to a place where my development environment works, but it’s such a struggle just to have it all integrated! I discovered that Emacs has eshell, a pseudo-shell that can run for loops over a space-separated list or over the expanded output of another command. Combined with MSYS2, I can start to get basic command output in a buffer and run build commands from within Emacs, though unfortunately only on a dumb terminal.

So I can save my build commands in an org-mode document using org-roam and then execute the build, and it runs fine, although the output is not syntax-highlighted. Then I copy the output APK filename and run apkanalyzer files list with the name enclosed in single quotes so the shell doesn’t mangle it, and it faithfully prints out the list of resources stored within the APK as a sanity check.

But by this point, with adb install not printing its output properly on a dumb terminal, I should probably run stuff in a proper command prompt anyway. The only reason a shell is useful to me is proper command-line editing, which Windoze sucks goats’ balls at. No, I don’t really want to use PowerShell; what I really want is a cmd replacement with proper command-line editing and pipes, which may very well be a pipe dream (alangya). But at least I can debug my program now, and it’s producing proper debug info in logcat for me to start debugging it properly.

1 Like

The day’s work-related activities: downloading material from the Android and Oculus websites about the Android manifest properties that need to be set for VR apps and ordinary Android apps, and what they all mean.

There is an incredible amount of data to be reviewed, saved and stowed in case of internet blackout.

The minimal config file includes Android permissions, features used and their versions, intent filters, runtime libraries to link, the path to the assets folder, a list of configuration changes the app should handle on its own without restarting, the signing keystore path and password, minimum and target SDK versions, whether the app is debuggable, whether it is full screen, which supported devices it runs on, app labels, and the target architecture, and only then do you start having enough information to build the app!

Printed information about permissions, intent filters, features, etc. to PDF in case the developer website ever goes down. Right now my folder of PDFs relating to Android, not including some recent additions, is sitting at the very modest size of 250MB. It is likely to reach at least 320MB by the end of it all.

I’m only documenting the basics in my org docs and leaving the rest to the PDFs before I move on to using OpenXR to obtain controller input and poses. Then my next steps are to swap out the textures and position them properly, and to start dealing with the logic of collisions and interactions.

EDIT: continued reading the library’s code. It feels like I’m going to have a few weeks of reading code just to fully understand the system that’s been implemented. Some aspects are simple enough to understand; others require digging down through several third-party libraries.

The basic structure, so far as I can tell, is that items added to engine.world have a tuple of a collider and a body type, and understanding the physics of the engine means understanding these. Some of the colliders are simple: a plane will generally have a half-space associated with it, defined by the plane’s normal vector. Other colliders are more complex, like capsules oriented along a specific axis. Colliders can have “restitution” associated with them, and different collision types can be detected.
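To make sure I’ve actually got the half-space idea straight, here is a toy illustration (my own throwaway types, not the library’s): the plane’s unit normal and offset define the solid region, and a sphere is in contact once the signed distance of its centre drops below its radius.

```rust
// A half-space is the set of points p with dot(normal, p) <= offset.
#[derive(Clone, Copy)]
struct HalfSpace {
    normal: [f32; 3], // assumed to be unit length
    offset: f32,      // distance of the plane from the origin along the normal
}

fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn sphere_hits_half_space(centre: [f32; 3], radius: f32, hs: HalfSpace) -> bool {
    // Signed distance of the sphere centre above the plane.
    let dist = dot(hs.normal, centre) - hs.offset;
    dist <= radius
}

fn main() {
    let floor = HalfSpace { normal: [0.0, 1.0, 0.0], offset: 0.0 };
    // A ball of radius 0.1 m with its centre 0.05 m above the floor: contact.
    assert!(sphere_hits_half_space([0.0, 0.05, 0.0], 0.1, floor));
    // The same ball a metre up: no contact.
    assert!(!sphere_hits_half_space([0.0, 1.0, 0.0], 0.1, floor));
}
```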

Then there are the hands and how they are animated. Different values for the depression of the grip controller input are animated with different hand poses, and there can be blends between different hand poses. As I read through all this stuff it’s hard to know how much of it I’m going to have to understand to implement my own custom hands or override the default hands system, or whether I should even be trying that at first rather than just focusing on the creation of custom mesh objects.
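The blending itself seems conceptually simple, something like this sketch (not the engine’s actual animation code): each joint has an “open” value and a “gripped” value, and the grip trigger blends linearly between them.

```rust
// Blend between an open pose value and a gripped pose value by the grip
// trigger amount, clamped to [0, 1].
fn blend(open: f32, gripped: f32, grip_value: f32) -> f32 {
    let t = grip_value.clamp(0.0, 1.0);
    open + (gripped - open) * t
}

fn main() {
    // Hypothetical curl angles (radians) for one finger joint.
    let open_curl = 0.1;
    let gripped_curl = 1.4;
    for grip in [0.0, 0.5, 1.0] {
        println!("grip {:.1} -> curl {:.2}", grip, blend(open_curl, gripped_curl, grip));
    }
}
```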

1 Like

Finally found a decent tutorial for skybox creation in Blender:

The key step I was missing was flipping the normals. The tutorial uses an earlier version of Blender, though, and I’ll be using multiple image textures, so it may be a little more work.

Technically, though, I could create separate mesh planes for each direction, name them appropriately, and save them all within the one exported glb. The key things would be making the normals point the right way and making each plane shadeless.

The KTX format appears to support saving cubemap textures in a single file; now I’m kinda wondering whether Hotham, which imports a bunch of ktx2 textures to set up its environment, already has cubemap support built in. So I guess it’s back to the source code and docs to read how it uses those imported textures and how the texture-mapping code works in general. It would be really nice if I could set up a cubemap with only a few lines of code.

The example cubemap I’ve been reverse engineering according to the file format (KTX 1.1, since the KTX2 tools won’t read it) is compressed with an internal format which, once I looked it up in the .h file, turned out to be Ericsson Texture Compression v2 (ETC2) RGB8. That gives 6x compression of the texture data, which is why the data following the 0x40-sized header was only 0x80000 bytes, despite 0x400 * 0x400 pixels times 3 bytes per pixel being 0x300000. Apparently it does this by converting each 4x4 group of pixels into a 64-bit word. That’s a pretty nifty trick! It specifies a base color in half as many bits for each 4x2 sub-block along with a 3-bit brightness table index, and then offsets each pixel from the base level with a small number of bits (one of 4 signed values, i.e. 2 bits per pixel).
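The size arithmetic checks out; a throwaway sketch for a 1024x1024 ETC2 RGB8 texture:

```rust
// 4x4 pixel blocks, 64 bits (8 bytes) per block, versus 3 bytes per pixel raw.
fn main() {
    let (w, h) = (1024u64, 1024);
    let uncompressed = w * h * 3;    // RGB8, 3 bytes per pixel
    let blocks = (w / 4) * (h / 4);  // number of 4x4 blocks
    let compressed = blocks * 8;     // one 64-bit word per block
    println!("uncompressed: {:#x} bytes", uncompressed); // 0x300000
    println!("compressed:   {:#x} bytes", compressed);   // 0x80000
    println!("ratio: {}x", uncompressed / compressed);   // 6x
}
```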

And now several hours later, just found a way to convert ktx to ktx2, in the somewhat repetitively named ktx2ktx2.exe in the distribution. So this should make things a bit simpler!

1 Like


OK now we’re starting to cook with gas.

This still isn’t perfect, but it’s a good start.

Prompt: “A velvet red plush couch with furry cushions enclosed within a glass observation dome with rich indigo carpets floating in space, lit by multiple tealight candles which give off a warm orange glow. The observation window faces a breathtaking view of the earth suspended in space with the sun rising behind it.”

3D view:


Need to figure out how to map something like this onto a skybox now. Unfortunately this isn’t your typical cubemap texture; I need to be able to transform it into the six ktx2 textures required to bound the space.

4 Likes

Exporting the cubemap turned out to be relatively easy. Relatively, in that I spent an age installing Node.js locally, ran npm install on the HDRI-to-cubemap package from GitHub, fixed the easily resolvable security issues with npm audit, added an environment variable so Node would run with the legacy OpenSSL provider on localhost, and started the HDRI-to-cubemap tool as a local server. I was then able to upload the HDR image created by Skybox AI and export it as a series of cubemap files like I wanted, in PNG at 1024 pixels per wall.

Next I started looking at the code of the Hotham example I’ve based my startup program on, to understand how dimensions work in irreality. The example scene scales the damaged helmet to half size and translates it to -1 and 1.4 on x and y. Calculations on the size in the glb file show the helmet as 1.89 x 1.8 x 2 (x, y, z) with a slight rotation applied, which should mean the helmet shows up in 3D space at roughly 0.95 x 0.9 x 1. So now my task is to find a measuring tape and make sure everything I’m seeing checks out and that the translation matrix acts in meters. If it does, great! The generated room, if I’m reading it correctly, is about 3 x 3 x 3, no more than 5 x 5 x 5 actually, so I can play with the size of the skybox.
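Quick sanity check of those helmet numbers (a throwaway sketch, assuming the glam crate, which I believe Hotham uses for its math):

```rust
use glam::Vec3;

fn main() {
    let helmet_size = Vec3::new(1.89, 1.8, 2.0); // dimensions from the glb, in metres
    let scaled = helmet_size * 0.5;              // the example scene scales it to half size
    println!("{:?}", scaled); // roughly (0.945, 0.9, 1.0)
}
```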

I have to pack the glb so the texture is included within the file, and I plan to do one per wall, just like the floor. The generated textures are fairly big: 6.8MB in total, ranging from 0.9MB to just under 1.5MB per wall. However, the walls are static and shouldn’t change. It would be nice if they could be packed as ktx2, but Blender doesn’t support that yet, so it will have to be a secondary step after I test the method of packing into a glb.

Then I update the assets folder to include glbs for negx, posx, etc., give each wall a fixed location, make sure the normals are correct, then load these in via code and test rendering. It looks like the shader nodes for the floor import into Blender as just a simple image texture; not so for the helmet, which has a bunch of nodes (about 10), some even appearing to be unconnected. I definitely need to understand shader nodes better; right now I don’t have an effing clue beyond the very basics. But as long as I make the texture on the walls shadeless, I’m pretty sure the distortion created by the skybox conversion should do the trick and create a convincing illusion of space.

1 Like

Didn’t get any further with the program today, but I had an experience that put a fire under my ass about working on this, and further enlightened me about why I need to make this program that many will hopefully purchase or use, and that will help fulfill the wish of Ed.

I’m going to need to build a bit of cryptographic funkiness into my program under the hood but it will be worth it, even though many will never figure out the egg. But apparently I have some further research I need to do before everything that needs to be added to this flavor saver (or saver flavor) is gathered in the one place.

1 Like

Well, after several hours of playing with cubes and emissive materials with the textures painted on them, I imported it into my scene only to find a crash. Apparently the game library doesn’t like materials using emission shaders, and I assume that means I’m going to have to create my skybox as a series of shadeless textures instead. So I redid the box setup: 5x5 cubes with z scaled to 10cm, each slab translated 2.55m along the appropriate axis so that the “walls” of the room are 5 metres apart. Now I get to play texture doctor and see if I can UV-unwrap all the faces on each pseudo-mesh and apply an image texture to only the inner faces.

Got it working. Or at least as good as I’m going to get it tonight.

I ended up plugging the textures directly into the material output node, since in Blender 3.5+ shadeless seems to be legacy due to how they redesigned the node system. Unfortunately, on the ceiling at least, the warping of the textures is obvious. I think this really needs to be a capsule or cylinder with the texture running around a curved surface, but it’s a start!

1 Like

Slow day with not a lot accomplished. I showed the skybox to my local housemates and at least they were impressed. I had to fix my PATH again: clang++ and the LLVM compiler binaries were somehow missing from it despite having been there earlier.

I spent several hours reading the code for the stage management, the ECS (entity component system), and information about the local and global transforms for different entities. It took me what seemed like an age to figure out that to move the stage in global space, i.e. to specify where the HMD should be positioned to start off, you have to get the local transform of the stage and update it using update_from_affine.
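Roughly what that ends up looking like; this is a sketch from memory (the exact types and the update_from_affine signature may not be spot on), just to capture the idea of building an affine from the pose I want and pushing it into the stage’s LocalTransform:

```rust
use hotham::components::LocalTransform;
use hotham::glam::{Affine3A, Quat, Vec3};

// Move the stage so the HMD starts where I want it in global space.
fn place_stage(stage_transform: &mut LocalTransform) {
    let pose = Affine3A::from_rotation_translation(
        Quat::IDENTITY,            // no rotation yet
        Vec3::new(0.0, -2.5, 1.0), // drop the stage down and step it back
    );
    stage_transform.update_from_affine(&pose);
}
```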

Given that I only started my journey into Rust just over a month ago, I can’t be too hard on myself for not being at the point where coding in it is a breeze. I’ve spent most of my life working with typeless or loosely typed languages where memory management was not the major concern, and thinking in terms of vectors and matrices takes some getting used to. It’s definitely giving the ME part of the scripting a good workout.

I did have an interesting conversation with a few devs today that makes me wonder if maybe the non-sexual elements of WB are beginning to have their effect. Also, on a dating site I used to frequent, someone viewed my profile the other day, the first time in ages. I remember when I was running those types of subs back in 2020 and 2021, at least during the first short period I ran them, that would happen consistently. However, I’m not going to follow the view up; I cancelled that subscription months ago, and I’m better served waiting until my tax return comes through and going out with my local circle to be introduced to people who I know are of quality.

Everything is kind of in this nebulous semi-waiting space until Monday when I take delivery of the two reasonably cheap items on eBay I bought to help me tune out any d-monic distractions and focus my spiritual and intellectual purity on just the goals and the code. I’ll keep working on the skybox but I don’t expect significant progress until I dedicate those items and begin carrying them on my person regularly and get more serious about my daily practice.

It’s a heady combination of subs that I’m gradually weaving together with all this bloom, but I think ultimately it’s the right one: combine the spiritual, persuasive/magnetic, and intellectual/technical aspects in a way that is unique to my personality and needs. I’m hoping the path will become clearer as I continue “hacking placidly” (Technolorata) :slight_smile:

Got the translation figured out to have me sitting/standing on the floor.

The globally oriented stage has its origin at the position of the HMD, and the engine will usually default the stage object to the HMD’s position relative to world space. That means you translate your POV by changing the stage-space translation. I translated by -2.5 on the y axis and +1 on the z axis, which effectively put me on the ground one step back.

The next thing I needed to do was rotate 90 degrees clockwise. To do that, apparently you set the LocalTransform’s rotation component to a Quat constructed from an axis and an angle in radians. I entered the code and fired it up; sure enough, I’m on the floor facing the window! Recompile to remove the z translation, and it’s the middle of the room. So everything seems to be working properly.
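For reference, the gist of the final pose in glam terms (a sketch rather than the exact code; the sign of the angle depends on which way you count “clockwise”):

```rust
use glam::{Quat, Vec3};
use std::f32::consts::FRAC_PI_2;

fn main() {
    let translation = Vec3::new(0.0, -2.5, 0.0);                // down 2.5 m, z offset removed
    let rotation = Quat::from_axis_angle(Vec3::Y, -FRAC_PI_2);  // quarter turn about +Y
    println!("translation {translation:?}, rotation {rotation:?}");
}
```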

So far it’s no more than 35 lines in my movement module, including use statements and dead space. And I can still stand up and look out the window of the space station, even snap a picture. Examining the skybox up close shows me where all my little inconsistencies in the design are. I now wonder if I can fix the transforms of the skybox walls in code to get them rotated and translated properly, or if I should just do it in Blender (a rough sketch of the code version below). My next test will be to build a few more skyboxes and see if the headset can handle switching between them. I made a few AI skyboxes so I have a bit of variety to test out, including a meditation retreat.
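If I do go the code route for the walls, the transforms themselves are not much work. A sketch of the idea (assuming glam, and assuming each wall mesh is a quad whose normal points along +Z in its own local space, which may not match my actual meshes):

```rust
use glam::{Quat, Vec3};
use std::f32::consts::{FRAC_PI_2, PI};

fn main() {
    let half = 2.5; // half the room size, in metres
    // (wall name, translation, rotation turning the local +Z normal inward)
    let walls = [
        ("negz", Vec3::new(0.0, 0.0, -half), Quat::IDENTITY),
        ("posz", Vec3::new(0.0, 0.0, half), Quat::from_rotation_y(PI)),
        ("negx", Vec3::new(-half, 0.0, 0.0), Quat::from_rotation_y(FRAC_PI_2)),
        ("posx", Vec3::new(half, 0.0, 0.0), Quat::from_rotation_y(-FRAC_PI_2)),
        ("negy", Vec3::new(0.0, -half, 0.0), Quat::from_rotation_x(-FRAC_PI_2)),
        ("posy", Vec3::new(0.0, half, 0.0), Quat::from_rotation_x(FRAC_PI_2)),
    ];
    for (name, translation, rotation) in walls {
        println!("{name}: translate {translation:?}, rotate {rotation:?}");
    }
}
```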

@emperor_obewan

How effective do you think running Index Gate alongside Stark would be for someone looking to start software companies and technology/engineering companies as a whole?

Based on your experience so far learning programming and design.

1 Like

It’s hard to answer that question because the words “how effective” and “as a whole” are very general. I can speak to my own experience and the rapidity of learning and try to give a balanced answer based on my prior experience, but I can’t speak to how it would be for someone starting out from scratch with less knowledge or experience than I had. The learning curve (which would be my interpretation of recon on IG, in terms of going from one level of ability to visually/conceptually integrate new areas to the next) and the time from start to delivery are going to differ depending on how much you’re learning from scratch.

I had previously written an accounting system in 50K lines of Perl, for example, and built dashboards using PowerShell and Windows Forms that were still being used at the large call center I wrote them for when I left that company. I started in computers back in the days of Z80 microprocessors and had some existing knowledge of systems programming and assembly language, so I wasn’t starting out from scratch. Plus I had recently (December of last year) been using various subs to apply myself to learning threaded, GUI-driven Python development.

However, full-blown Blender model design, 3D game design concepts, VR development, and Rust are all things that were completely foreign to me as recently as two months ago. I would estimate I’ve had no more than 1.5 months working on my new business, part of that very unfocused, and I’m only just starting to be able to reach higher levels of focus.

I do think Stark is a good fit to work with IG, and I have used it myself, although I seem to have found EB to be more in line with my general modus operandi. But the aspects of innovation that Stark brings would help.

What I can say about my own experience with IG so far is to compare it to my experience last year.

Last year I learned Python and data-analysis concepts over the course of one month to build a GUI-driven program, which I then refined over another month or so into a full-featured program. However, Python is a much simpler language than Rust, and I was not learning as many other subjects at the same time.

This past month and a half, I have gone from knowing nothing about Rust other than its speed and its original systems-programming focus, to:

  • Setting up my Emacs with rust-mode and lsp-mode/lsp-ui-mode to get Rust syntax checking and code completion, and beginning to understand how Emacs and Rust integrate via LSP.
  • Learning about toolchain management and pinning, dependencies, how to write a Cargo.toml file, what other config files you can create, etc., and how to set up my dev environment on Windows with the Android SDK and NDK so everything compiles seamlessly.
  • Reading and understanding the source code of crates written by others, and being able to make sense of which functions I need to use even if the documentation sucks.
  • Writing my own modules and implementations using lifetimes, understanding how to fix various compiler errors, how to deal with the borrow checker, etc.

Now, if you Google “how long does it take to learn Rust”, most people suggest a year to feel fully confident with it. And you also have to understand that throughout this month and a half I have been reading documentation to get familiar with modules, procrastinating a hell of a lot by watching doomer stuff and videos that can take up hours of my day, and learning Blender and Emacs customisation at the same time. I wouldn’t say I’m 100% confident with Rust yet, but I’m far enough along that within the next week or two I’ll be getting balls deep in the code.

So in terms of the efficiency of IG for the task, I’d say both quantitatively and qualitatively my learning speed and technical proficiency have gone up like a character leveling up in WoW. I’ve had to learn new concepts like what an affine transformation is, what a half-space is, and how to think in different kinds of space (Blender, for example, typically has Z pointing up, while Hotham and VR games tend to have Y pointing up and Z pointing towards you), and to be able to visually translate between them. I needed to refamiliarise myself with matrices and vectors, and so much more. To be able to do all that in 1.5 months is an indication of its effectiveness in helping me up my ability to grok new technical concepts.
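As a concrete example of that coordinate juggling, the usual Blender-to-glTF convention maps Z-up coordinates into Y-up, Z-towards-viewer ones, something like this sketch:

```rust
// Blender is Z-up with +Y pointing away from the viewer; glTF/VR space is
// Y-up with +Z pointing towards the viewer, so a point converts as
// (x, y, z)_blender -> (x, z, -y)_gltf.
fn blender_to_gltf(p: [f32; 3]) -> [f32; 3] {
    [p[0], p[2], -p[1]]
}

fn main() {
    // One metre "up" in Blender is still one metre up in glTF space.
    assert_eq!(blender_to_gltf([0.0, 0.0, 1.0]), [0.0, 1.0, 0.0]);
    // One metre "forward" in Blender (+Y) ends up at -Z.
    assert_eq!(blender_to_gltf([0.0, 1.0, 0.0]), [0.0, 0.0, -1.0]);
}
```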

How difficult it is going to be will depend on the difficulty of the subject matter of course, but I understand IG has scripting for a bunch of different technical modalities.

Hopefully this answers your question.

2 Likes

Great answer! I appreciate you taking the time to answer in such detail.

I’m basically starting off with zero experience in the programming world.

My goal is to become very proficient in JavaScript (MERN) for developing websites, and basically a complete master of all things Python for AI, software, etc.

Lots of work ahead. Thankfully, when I get obsessed with something and actually focus, I learn very quickly. The issue is capturing my attention and holding it.

Dopamine issues 🥲

2 Likes

Some days you just want to hit things.

I spent the final hours of last night puzzling over quaternion rotation and watching a video of what was supposed to be the stereographic projection of points on a hypersphere onto 3-dimensional space, before I figured out I could just multiply the rotation by a fraction of the negative-z unit vector to get travel in the direction of the rotation.
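In glam terms the whole trick boils down to something like this sketch: rotate -Z by the current orientation to get “forward”, then scale it by the step size.

```rust
use glam::{Quat, Vec3};

// Rotating the -Z unit vector by the orientation gives the forward direction.
fn forward_step(orientation: Quat, speed: f32) -> Vec3 {
    orientation * (Vec3::NEG_Z * speed)
}

fn main() {
    // Facing 90 degrees to the left of the original forward: the step comes out along -X.
    let yawed = Quat::from_rotation_y(std::f32::consts::FRAC_PI_2);
    println!("{:?}", forward_step(yawed, 0.05));
}
```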

Then I spent several hours fighting with the borrow checker because it did not want me to borrow the world or the stage space while their parent struct was mutably borrowed. I ended up having to construct my object on every frame inside the tick function instead. Hell, at least it compiled.

However, it turned out that the rotation of the stage space did not include the rotation of the head-mounted display. So I could move forward and back, but regardless of where my head was turned it was still the same forward and back.

No problem, I thought, just multiply the two rotations together minus the y rotation of the head, then normalize the output. WRONG.

I just spent some of the most frustrating time I’ve had lately, time I am not getting back, flying through the floor of my skybox and then high above it, despite the code being supposed to NOT move the stage space on the y axis. It’s SO, SO frustrating when you set both y components to zero only to find the damned fecking things come back again through multiplication. And I find myself moving in directions my head really isn’t pointed. Like, I turn my head right and the damned thing goes forward!!
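One thing I want to try when I come back to it (a sketch, not tested yet): rather than zeroing quaternion components, take where the HMD’s forward vector actually points, flatten it onto the XZ plane, and rebuild a pure yaw rotation from that.

```rust
use glam::{Quat, Vec3};

// Extract only the yaw (rotation about +Y) from a full 3D orientation.
fn yaw_only(orientation: Quat) -> Quat {
    let forward = orientation * Vec3::NEG_Z;         // full 3D forward direction
    let flat = Vec3::new(forward.x, 0.0, forward.z); // drop the pitch component
    if flat.length_squared() < 1e-6 {
        return Quat::IDENTITY; // looking straight up or down: no usable yaw
    }
    // Angle of the flattened forward vector around +Y, with -Z as yaw zero.
    let yaw = f32::atan2(-flat.x, -flat.z);
    Quat::from_rotation_y(yaw)
}

fn main() {
    let hmd = Quat::from_rotation_y(0.3) * Quat::from_rotation_x(0.5);
    println!("{:?}", yaw_only(hmd)); // should be close to a pure 0.3 rad yaw
}
```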

Ready to give up for a while and go watch cheesy videos. What a mind-f.

1 Like

Well, it’s Friday evening here. Thank God for that. I guess I will probably continue to plug away at this code over the weekend, but I know I need to take a break.

I think I figured out the problems with my code but I haven’t had a chance to test it out yet. I had to reread the docs a few times to figure out that I had been doing this all a bit wonky.

Even though the stage object has no parent, it has both local and global transforms, and the global gets updated from the local by the engine on each tick, so I was correct in updating the LocalTransform. I was also correct in not updating the HMD object, as it gets updated each tick by the engine from OpenXR’s description of where the HMD is in stage space.

The problem is the transforms. The really weird thing is that whenever I attempt to rotate the global stage space, it seems to rotate for a fraction of a second and then snaps back, with the HMD seeing the same rotation. Which is strange, because setting the rotation of the stage space should change the location of the HMD in stage space, since they both exist in globally oriented space.

I have resigned myself to the fact that I’m just going to have to debug-print every fricking input event into adb logcat and see how the HMD orientation, stage space, and so on are changing. I’ll get to it at some point over the weekend; I’m too stubborn to let something like this beat me.
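The plan, roughly, is a per-tick dump along these lines (a sketch assuming the log crate, whose output lands in logcat on Android once a logger is hooked up):

```rust
use glam::Quat;

// Dump the poses I care about once per tick so I can diff them by eye
// while moving the thumbstick.
fn log_poses(tick: u64, hmd_rotation: Quat, stage_rotation: Quat, thumbstick: (f32, f32)) {
    log::debug!(
        "tick {} | hmd {:?} | stage {:?} | stick {:?}",
        tick, hmd_rotation, stage_rotation, thumbstick
    );
}
```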

1 Like