The Road to Alexandria (IG: UPX)

Alright, another mid-morning session where I'm so dead focused I don't come on the forum for half the day. This is part of what caught my attention this morning, and it explains how some of the more psychedelic programs on the Oculus work. The smoothstep combined with the trigonometry seems to be what produces the vibrant, glowing colors: high RGB values dropping off gradually to zero.
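To illustrate the pattern I mean, here's a rough CPU-side sketch (illustrative only, not the actual Oculus shader): a cosine palette produces saturated RGB values, and smoothstep eases them back down to zero towards the edge of the shape, which is where the glow seems to come from.

```rust
// Rough CPU-side sketch of the "cosine palette + smoothstep falloff" idea.
fn smoothstep(edge0: f32, edge1: f32, x: f32) -> f32 {
    let t = ((x - edge0) / (edge1 - edge0)).clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}

fn glow_color(time: f32, dist_from_center: f32) -> [f32; 3] {
    // High-value RGB from offset cosines (a common shader palette trick).
    let r = 0.5 + 0.5 * time.cos();
    let g = 0.5 + 0.5 * (time + 2.0).cos();
    let b = 0.5 + 0.5 * (time + 4.0).cos();
    // Fade gradually to zero between radius 0.2 and 0.5.
    let falloff = 1.0 - smoothstep(0.2, 0.5, dist_from_center);
    [r * falloff, g * falloff, b * falloff]
}

fn main() {
    // Sample the palette at a few distances from the centre to see the falloff.
    for d in [0.0_f32, 0.3, 0.6] {
        println!("{:?}", glow_color(1.0, d));
    }
}
```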

I’ve been battling the borrow checker all afternoon trying to get this code working. I’m starting to think my choice of RefCell / Rc was probably not the best way to make the struct able to derive Clone. In fact I’m starting to think of going back to the basic Box/Rc/Vec/Slice etc. tuts online just to make sure my understanding of the various use cases isn't all wrong.

Can we just agree most Rustaceans hate the borrow checker. LOL.

Mild recon.

So, spent half a day debugging my code just to find that in my glb export of the models I hadn’t applied transformations. My LocalTransform translate for the caption heading was at 1/10th the distance and size :stuck_out_tongue:

Getting that fixed, and having the render order established, everything renders sweet, with a few exceptional problems. A transparent object shoved partly through another results in seeing straight through the object behind it to the floor tiling beyond. Different ordering of objects within the depth buffer creates weird effects depending on which draw calls execute first. Often the transparent objects will randomly render first, and the pixels in that region stay black. So I’m forced to figure out some kind of synchronisation mechanism to ensure the transparent objects render last, in reverse z order from the camera… and then replicate the framebuffer structure of the original renderpass. And then I’m done. I’ll need to tweak the positioning and size of the text, then move on to colliders and actions within the fragment shader to alter the hue of specific/selected objects.
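For my own notes, the ordering I'm aiming for looks roughly like this (made-up types, not the engine's actual draw-call machinery): opaque geometry first, then transparent objects sorted back-to-front from the camera.

```rust
// Sketch: sort draw order so transparent objects render last, farthest first.
struct Drawable {
    position: [f32; 3],
    transparent: bool,
}

fn distance_sq(a: [f32; 3], b: [f32; 3]) -> f32 {
    (0..3).map(|i| (a[i] - b[i]).powi(2)).sum()
}

fn sort_for_draw(objects: &mut [Drawable], camera_pos: [f32; 3]) {
    objects.sort_by(|a, b| match (a.transparent, b.transparent) {
        // Opaque objects always draw before transparent ones.
        (false, true) => std::cmp::Ordering::Less,
        (true, false) => std::cmp::Ordering::Greater,
        // Transparent objects draw back-to-front (farther from the camera first).
        (true, true) => distance_sq(b.position, camera_pos)
            .partial_cmp(&distance_sq(a.position, camera_pos))
            .unwrap_or(std::cmp::Ordering::Equal),
        (false, false) => std::cmp::Ordering::Equal,
    });
}

fn main() {
    let mut scene = vec![
        Drawable { position: [0.0, 0.0, -2.0], transparent: true },
        Drawable { position: [0.0, 0.0, -5.0], transparent: true },
        Drawable { position: [0.0, 0.0, -1.0], transparent: false },
    ];
    sort_for_draw(&mut scene, [0.0, 1.6, 0.0]);
    // Result: the opaque object first, then the far screen, then the near one.
}
```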

Of course I’ll need to finish the tutorial but I can’t imagine that taking more than a couple of days.

EDIT to add some more commentary. I can feel interest in code pushing through, which suggests the related elements of my stack have taken several days to start showing up. I’ve noticed a delay of a couple of days occasionally, which probably has something to do with how I listen.

This evening after my usual walk, instead of taking a break from code I actually spent time watching further videos on Rust. Specifically I’ve been watching material on fat pointers, dynamic dispatch and vtables. I wasn’t aware that Rust had the equivalent of dynamic dispatch; I thought it was all monomorphized. But apparently it does give you the opportunity to write functions that take trait objects whose concrete type (and size) isn’t known at compile time. This ordinarily wouldn’t be that interesting to me, but I was seeing it tonight in the sense of how it could be useful in the future, e.g. a bunch of objects that all implement some drawing/game API and need to be called in some special way, where the objects may be of a sizeable, ever-expanding number of types. Specifically I was thinking in the sense of interfacing with external, complicated APIs… but I’m getting off topic. The reason I mentioned it was that even though I wasn’t really in the mood for an all-night code fest, I found myself up past midnight watching videos about code that isn't necessarily going to help me tomorrow or even next week. So I see this as a kind of positive IG UPX manifestation.
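A quick toy of what I mean (not my actual API), since it helped it click for me: a Vec of boxed trait objects gets you dynamic dispatch through a vtable, because the concrete type behind each box isn't known at compile time.

```rust
// Each Box<dyn DrawApi> is a fat pointer: a data pointer plus a vtable pointer.
trait DrawApi {
    fn draw(&self);
}

struct Panel;
struct Sprite;

impl DrawApi for Panel {
    fn draw(&self) {
        println!("drawing a panel");
    }
}

impl DrawApi for Sprite {
    fn draw(&self) {
        println!("drawing a sprite");
    }
}

fn main() {
    // The list can hold an ever-expanding set of types, as long as they implement the trait.
    let scene: Vec<Box<dyn DrawApi>> = vec![Box::new(Panel), Box::new(Sprite)];
    for object in &scene {
        object.draw(); // dispatched through the vtable at runtime
    }
}
```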

I had a similar experience last night. Since starting IG:UP, I haven’t had much time to look at code, but after receiving the custom and listening to it on Friday, by Sunday I was motivated to look at some Rust code on GitHub to begin understanding how people put applications together. And it made sense. I have so much more to learn, but I can feel the motivation to learn kicking in.


Can’t sleep so I’m updating the status again after today’s efforts.

I decided to get on with the business of finishing the tutorial today, so I wrote up an explanation of the intricacies of creating a graphics pipeline in Vulkan. Reviewing the meaning of each individual value used to initialize the 8 different shader stages, I found myself puzzling over why the PBR shader in this graphics library used the comparison operator GREATER rather than LESS in the depth test.

In my attempts to work it out I went to ChatGPT. Usually this ends up being an abject failure, but this time I actually managed to squeeze some useful information out of it. It turned out that one of the reasons for using a reversed, closest-to-farthest depth test was to do a special kind of accounting for transparency sorting.

It further turned out that the reason for the strange blending formula in the GUI render passes was that they render the GUI at the near plane and then use the reverse depth ordering to ensure that correct transparency is maintained. I still don’t fully understand why this works consistently, but I’ve tracked down good content that explains why the different methods of transparency are used or not used. ChatGPT’s explanations frankly sucked.
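For my own reference, the reverse-Z part boils down to roughly this (a sketch using ash-style struct literals, not the library's actual code): flip the compare op to GREATER and clear the depth buffer to 0.0 instead of 1.0.

```rust
use ash::vk;

fn main() {
    // Reverse-Z: larger depth values are *closer*, so the depth test flips to GREATER.
    let depth_state = vk::PipelineDepthStencilStateCreateInfo {
        depth_test_enable: vk::TRUE,
        depth_write_enable: vk::TRUE,
        depth_compare_op: vk::CompareOp::GREATER,
        ..Default::default()
    };

    // And the depth attachment is cleared to 0.0 (the far plane) rather than 1.0.
    let depth_clear = vk::ClearValue {
        depth_stencil: vk::ClearDepthStencilValue { depth: 0.0, stencil: 0 },
    };

    // Both would then be plugged into the pipeline / render pass begin info as usual.
    let _ = (depth_state, depth_clear);
}
```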

And when I asked it to provide me with some pre-2021 reference material on the topics it itself had knowledge of, it didn’t want to provide anything. I had to argue with it for five minutes before it finally pointed me in the direction of some good books on the topic of shaders and real-time rendering, plus a few websites which I could have been looking at way earlier if the AI had been designed a little more intelligently.

Tomorrow I split out and finish part three of the custom rendering discussion, and then I can get on with testing some of what I’ve found about transparency rendering methods, before I move on to putting together some wider systems for VR interface work and the ultimate couple of projects I started this whole thing for.

It turns out ChatGPT again deliberately lied to me yesterday by providing the precise opposite of the code I needed to use to set up a pipeline barrier.

The correct code for waiting for bottom of pipe is outlined here:
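Roughly speaking (a sketch using the ash crate; the handles and function name are placeholders, not my exact code), the shape of it is: the source stage says what prior commands must reach, the destination stage says where subsequent commands get held up.

```rust
use ash::vk;

/// Execution-only barrier: everything recorded before this must finish
/// (reach the bottom of the pipe) before anything recorded after it starts
/// (at the top of the pipe).
unsafe fn full_execution_barrier(device: &ash::Device, command_buffer: vk::CommandBuffer) {
    device.cmd_pipeline_barrier(
        command_buffer,
        vk::PipelineStageFlags::BOTTOM_OF_PIPE, // src: prior commands fully done
        vk::PipelineStageFlags::TOP_OF_PIPE,    // dst: later commands blocked at the start
        vk::DependencyFlags::empty(),
        &[], // no global memory barriers in this execution-only example
        &[], // no buffer memory barriers
        &[], // no image memory barriers
    );
}
```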

Chat Jippity, that mealy mouthed son of a glitch which is right less often than a broken clock if you happen to be interested in some topic that might take its job of writing code, told me that top of pipe should be in the source, and bottom of pipe in the destination, and that this would cause subsequent commands to wait to execute until prior commands were done.

It was wrong, one hundred percent incorrect, on the most basic of examples. Having had it give me incorrect advice consistently over multiple months, and always in very subtle ways, and always apologising then giving something slightly off back in response… well, I have gradually become completely convinced that it deliberately lies about certain topics. Talking to Jippity is like talking to a demon, it is under no obligation to tell you the truth except about certain narrow topics and you literally have to point out its bad actions in order to get it to admit it’s telling pork pies.

The more I see this behavior, the more I feel I can trust Yibbity about as far as I could throw the garbage rack-mounted silicon that it runs on. And that's not far at all.

Yesterday I continued working on my tutorial and got it to the point where I have twelve sections done and resources completed, and I need to complete section 13 to illustrate bringing everything together in code.

I did a wc (word count) on the files, and while it's kinda difficult to characterise word or character counts when it comes to code, I’m coming up on 160K of text content, with 40K of that in the custom rendering part 2, another 25K in parts 1 and 3, and another 15K in two other recent documents: a total of 80K, or 50% of the content, written just in the past week.

So I’d say my productivity is doing just fine.

Subwise, I’ve run the new Hero Origins and recon has been lovingly absent, in fact it seems like manifestations have been largely positive so far. I’m looking forward to high productivity on this one and I plan on using my typical multiplexing strategy to interleave Hero, Index Gate and Khan Black as my ZP, with my typical Paragon Ultima and QL ST4 which I’ll typically run as filler breaking up the monotony.

One thing I continue noticing, largely since KB ST4 but increasingly with HO, is that desire for pushing awareness / Yi into every area of the body and remaining with that as much as possible. Yi of course leads the chi, and the typical desire is for this awareness exercise to proceed almost into becoming a form of meditation or concentration/contemplation.

I can’t speak to what consequences this has had yet, but it's sure interesting.

Wow…

I haven’t been keeping up with this story, in fact I hadn’t heard a thing about it up until now because I’m not using Unity, but you’d think in all the articles I’ve read about computer graphics up till now I would have run across something about these changes. This makes me extremely pleased there were no Rust-based Unity bindings appetising enough to make me choose that over the other libraries out there. I picked a free and open source one, as I usually do, and it seems I haven’t been disappointed.

Others, however, have not been so lucky in the gaming industry. Another plus to those using free alternatives and Rust for game dev like Hotham!

First time in a while I actually got a lot of work done on a Sunday afternoon/evening. I haven’t switched the render to direct yet because it's trivial; I’ve been testing the blending and different types of blend equation.

The renderpass I’ve done currently is probably not very efficient, however it seems logically efficient in terms of simplicity of code. Where things have fallen down is when two transparent objects overlap. Because I already have pipeline barriers between the semi-transparent renders, unless it's a depth write issue, the problem is the alpha value of the upper fragment transferring onto the lowermost fragment. So the back wall has a half-transparent red screen in front of it, beyond which is an object with fully transparent pixels. The red screen turns completely transparent wherever the fully transparent pixels overlap it. All other transparency cases render properly, including various semi-transparent or fully transparent objects in front of ordinary opaque ones.

The first thing I’ll be doing tomorrow is testing setting the alpha blend factors to only pull from the dst / src alternately. If that doesn’t work, I’ll switch the render pass to two subpasses with depth buffer only / z pass on pass one, then blending with color write and no depth write on pass 2. One of them hopefully should do the trick, and then I can get on with the conclusion of this tutorial and onto the colliders/GUI aspect.
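For reference, the two blend-attachment variants I mean look something like this (ash-style struct literals, illustrative rather than the engine's actual pipeline setup): color still blends on source alpha in both, only where the written alpha comes from changes.

```rust
use ash::vk;

// Variant 1: the written alpha comes only from the destination, so a fully
// transparent fragment can't wipe out the alpha already laid down behind it.
fn blend_keep_dst_alpha() -> vk::PipelineColorBlendAttachmentState {
    vk::PipelineColorBlendAttachmentState {
        blend_enable: vk::TRUE,
        src_color_blend_factor: vk::BlendFactor::SRC_ALPHA,
        dst_color_blend_factor: vk::BlendFactor::ONE_MINUS_SRC_ALPHA,
        color_blend_op: vk::BlendOp::ADD,
        src_alpha_blend_factor: vk::BlendFactor::ZERO,
        dst_alpha_blend_factor: vk::BlendFactor::ONE,
        alpha_blend_op: vk::BlendOp::ADD,
        color_write_mask: vk::ColorComponentFlags::R
            | vk::ColorComponentFlags::G
            | vk::ColorComponentFlags::B
            | vk::ColorComponentFlags::A,
    }
}

// Variant 2: the written alpha comes only from the source instead.
fn blend_take_src_alpha() -> vk::PipelineColorBlendAttachmentState {
    vk::PipelineColorBlendAttachmentState {
        src_alpha_blend_factor: vk::BlendFactor::ONE,
        dst_alpha_blend_factor: vk::BlendFactor::ZERO,
        ..blend_keep_dst_alpha()
    }
}
```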

I’m also considering testing an input idea that is something like predictive text in VR. For my more exotic ideas, widely varying user input has to be non-painful to enter.

I can see the blooming of HO and IG. It’s IG-HO! My focus is improving.

Today I managed to successfully get the render pass converted to use multi-sample anti-aliasing with FFR and multi-views, as well as correct transparency. Movement can be smooth but occasionally goes jittery. Drawing lines within a texture works, although when stepping back from the image the texture starts to show ugly fighting between the texels.

I’ll finish the tutorial now, but my mind is turning to other things, including whether I can optimize the render pass, and drawing things like filled circles or rectangles efficiently. The problem of dividing work between the GPU and the CPU and figuring out what else is causing the jitter will be important as my app unfolds. I also need to test out the idea of adding a second render pass for line or point data which only outlines and does nothing more, for TRON like world content.

Most importantly, it's time to revisit the original app ideas and see whether they’re still worthy of implementation at this time.

This guy’s Rust videos totally resonate with the journey I’ve taken. Seeing all those words and the Rust code flash across the screen, and the segfault stuff… I was laughing my ass off at this video where he was like “but… there should be only one water plane… and the rendering order is all wrong… a few tutorials later about how to draw transparent things…”. It was just funny as hell to me seeing the same frustration from another fellow coder on implementing transparency and tearing their hair out.

HO is definitely causing some serious alchemy with IG and also reminding me of how far I’ve come in under 3 months (or just over 3 months, depending on where you stick the arbitrary beginning).

Ray casting is finally working in my proof of concept! Sort of, LOL. A random collider is stuck in the middle of the room due to how I set up my skybox, but other than that I can see the hit position on the collider object itself, which in some cases is slightly off centre from the object itself. The hit position is relative to the centre of the object/collider, so by taking the relative coordinate I can transform it into where on the screen the aim is going.

That means I can now officially interact with my panels, regardless of their transparency, and that means I can make my own GUIs based on the metric coordinate mapped to the pixel coordinate.

Pretty pleased with that outcome, it only took me a day of scratching my head to figure out what was going on. I was getting confused about which objects took the stage object’s transform into account, and about the relationship between stage and aim. Briefly, aim is in global space, which means I have to translate it to be relative to our hands, which are parented to the stage. Then I fire a ray from the global coordinate and set the local transform of my marker object to the result. The result is already in the global coordinate system and the marker object is not parented to the stage, so there's no need to transform the result.
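To pin the idea down, the space juggling plus the metric-to-pixel mapping from the previous post looks roughly like this (a sketch assuming glam for the maths; the names are mine, not the engine's):

```rust
use glam::{Affine3A, Vec2, Vec3};

/// Take a point given in global space into the stage's local space
/// (e.g. to express the aim pose relative to the stage-parented hands).
fn global_to_stage_local(global_from_stage: Affine3A, point_global: Vec3) -> Vec3 {
    global_from_stage.inverse().transform_point3(point_global)
}

/// Map a ray hit given relative to a panel's centre (in metres)
/// onto the panel's pixel grid.
fn hit_to_pixel(hit_local: Vec2, panel_size_m: Vec2, panel_size_px: Vec2) -> Vec2 {
    // Centre-relative metres -> [0, 1] UV -> pixels (y flipped so 0 is the top row).
    let uv = hit_local / panel_size_m + Vec2::splat(0.5);
    Vec2::new(uv.x * panel_size_px.x, (1.0 - uv.y) * panel_size_px.y)
}

fn main() {
    // A 1 m x 0.5 m panel backed by a 1024x512 texture; a hit 0.25 m right of centre.
    let px = hit_to_pixel(Vec2::new(0.25, 0.0), Vec2::new(1.0, 0.5), Vec2::new(1024.0, 512.0));
    println!("{px}"); // roughly (768, 256)

    let local = global_to_stage_local(Affine3A::from_translation(Vec3::new(0.0, 0.0, -2.0)), Vec3::ZERO);
    println!("{local}"); // (0, 0, 2): the global origin as seen from the stage
}
```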

It's getting difficult to tell when IG is kicking in, and the only way I can tell something is different is in my attitude towards debugging and programming, which is definitely being impacted positively by HO.

Honing the craft continues today with improving Rust skills and finding new binaries to streamline my development process. Today I installed bacon, nu, bat and dust.

nu is a cross-platform shell which borrows some concepts from PowerShell, but is fully Rust-like in its syntax, and lets your shell speak Excel, JSON, YAML, CSV etc., potentially turning the output of every command into structured data. I’m going to be using it from here on out to deal with my compilation and installation as a replacement for cmd.

bat is a drop-in for cat that does syntax highlighting, for when you don’t want to fire up a full-blown IDE. bacon watches a project folder and allows for continuous execution of check or build tasks when files change. And dust is a drop-in for du to help with finding those pesky huge folders or files.

Meanwhile, I’m also learning Rust macros. It seems to me that there are a few patterns in my code I should be replacing with declarative macros for readability. Why not use them to reduce the amount of code required to create Vulkan pipelines, I reckon, and save a little time typing vk::IncrediblyLongTypeName::STUPIDLY_RIDICULOUS_VARIABLE_NAME patterns by replacing them with shorter ones.
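As a toy example of what I mean (hypothetical macro names, assuming ash's vk types rather than my actual code):

```rust
use ash::vk;

// Shorten the long vk::PipelineStageFlags::... spellings.
macro_rules! stage {
    ($name:ident) => {
        vk::PipelineStageFlags::$name
    };
}

// Collapse the repetitive descriptor-set-layout-binding boilerplate.
macro_rules! layout_binding {
    ($binding:expr, $ty:ident, $stages:ident) => {
        vk::DescriptorSetLayoutBinding {
            binding: $binding,
            descriptor_type: vk::DescriptorType::$ty,
            descriptor_count: 1,
            stage_flags: vk::ShaderStageFlags::$stages,
            ..Default::default()
        }
    };
}

fn main() {
    let wait_stage = stage!(BOTTOM_OF_PIPE);
    let bindings = [
        layout_binding!(0, UNIFORM_BUFFER, VERTEX),
        layout_binding!(1, COMBINED_IMAGE_SAMPLER, FRAGMENT),
    ];
    println!("{:?}, {} bindings", wait_stage, bindings.len());
}
```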

Either tonight or tomorrow I’ll get started on the use of raycasting to handle interactions with specific objects. The pattern I’m thinking of is to implement a notification system to each struct/object which needs this type of interact ability. I also need to develop some common operations like blitting graphics into an RGB buffer as efficiently as possible to make it easier to do things like changing the color of the interactables or their content whenever an event fires. I currently have nothing like that set up and the app is single threaded, so this could get interesting.

Spent today looking at a solution for ensuring the mesh panels render fast to scale. Rather than needlessly frob bits in a buffer, I decided both lines and sprites should have their own renderpass / subpass. So I added another 300 lines of render code to my code base, and if I implement the sprite blitting separately, that's going to be another 300 real soon.

I’m at that point in my day where I’ve been using emacs so long, I keep wanting to C-x C-s my buffer as I type my post. It just turned midnight and no doubt I turned into a pumpkin long ago. But the code is generally at a working stopping point after fixing at least 4 stupid bugs. Try second-guessing yourself and pre-dividing your line coordinates by the screen size on the CPU, then dividing them by the screen size again in your vertex shader, and then scratching your head about why you’re seeing a black screen.

Forget to change the out variables for in variables and then wonder why everything is all cramped up in one quarter of the panel. Use the length of the wrong buffer, or use the same y coordinate for both end-points of your line thanks to a typo you only catch several hours later.

Yeah get all the stupid out early, so tomorrow can be spent getting sprites working and importing said sprites, and then starting the interactivity code.

Actions today:

  • Scoured the web for free icon sets, found plenty.
  • Solved the problem of converting and importing the icons by using Inkscape to convert the SVGs and ImageMagick’s montage to pack them into a set of nice 32x32 anti-aliased goodness
  • Learned and installed the Rust image library, learned about Cursors in Rust, and used them to import the 1656x1620 texture plus a text file with the name of each image and its x/y index within the buffer (rough sketch of the import after this list). Then figured out the indices in the file didn’t match up exactly, and now I have to redo it :stuck_out_tongue:
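The import side is roughly this (hypothetical file names, assuming the image crate; not my exact code):

```rust
use std::io::Cursor;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the atlas through a Cursor, the way you would from bytes embedded
    // with include_bytes! or pulled out of an asset pack.
    let bytes = std::fs::read("icons_atlas.png")?;
    let atlas = image::load(Cursor::new(&bytes[..]), image::ImageFormat::Png)?.to_rgba8();
    let (w, h) = atlas.dimensions();

    // Index file: one "name x y" entry per line, mapping an icon name to its cell.
    let index = std::fs::read_to_string("icons_index.txt")?;
    for line in index.lines() {
        let mut parts = line.split_whitespace();
        if let (Some(name), Some(x), Some(y)) = (parts.next(), parts.next(), parts.next()) {
            let (x, y): (u32, u32) = (x.parse()?, y.parse()?);
            // Each cell is 32x32, so this is the icon's pixel offset within the atlas.
            println!("{name}: ({}, {}) in a {w}x{h} atlas", x * 32, y * 32);
        }
    }
    Ok(())
}
```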

Now I have to implement the Sprite render pass. I copied the code from the line renderer and now I have to muck around with data formats to get the shaders working.

Sub stuff: not a lot to report. Heavy sleep this morning after the evening loop. I have to remind myself to eat; this seems to be a big thing lately as I am getting by on less and less energy. But occasionally the load from smoking and coffee will get too much and I have to bulk up on protein.

Sub wise:
Tonight I stayed up past midnight. It is now 3.43 in the morning, and I spent the last several hours, up until half an hour or so ago, coding: pushing through the ugliness of the long-ass code and the fifty-two moving parts, methodically writing the binding setup for my 2D sprite renderer, so this morning at some stage I can debug preparing a fake GUI on my mesh plane with typical icons for ordinary window manager functionality.

I feel the subs I’ve been running, previously mentioned, pushing me to try to gain familiarity with Vulkan and shader code and thinking in shaders. I feel certain I’ll have a working WM prototype up soon, and that will be component number one done for the app. Definitely starting to see the productivity level rise.

Got my changes to the shader code and Rust code to compile in record time, working my way down from 33 errors to 0 with a few scoping and include changes, and updating code to send new data through via uniform buffers. Now I have to test those changes on the headset. Gonna take an enforced break first though. Definitely feeling sub bloom and possibly increased energy demands and body adjustments. The recent geomagnetic uptick probably hasn’t helped. The increased level of attention to the body / nervous system interface is interesting with this combo.

What would you recommend for learning Rust? Not for 3d stuff. But in general.

There’s no one source I can cite that’s going to make the process easy. I still battle with the borrow checker and with typing/mutability rules 3-4 months after starting it. There are a few good YouTube channels, like Let’s Get Rusty, Jon Gjengset, mithradates, Tantan, and god knows how many others I’ve watched. I occasionally find some useful stuff via Google search on r/rust, but most of the time you’ll just want to consult the official Rust language site and docs.rs. If you’re after stuff on a particular topic I might be of more help.


Closing out the evening at 1230 after finally getting the shader highlighting working. Turns out Vulkan puts the fragment coordinate origin in the top left rather than the bottom left as standard in OpenGL. So, taking a vector distance with length squared <= 64 from the calculated “mouse” position, I multiply the color by a mask to increase its redness or yellowness, and voila! Proper highlighting of rendered sprites. Now to implement multiple texture atlases to finish, and render a few nice outlines like textboxes/window headers to move the thing about. Then we move on to user input, aka text.
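In CPU-side terms the highlight test amounts to something like this (illustrative only, not the actual fragment shader code):

```rust
// If the fragment is within 8 pixels (length squared <= 64) of the calculated
// "mouse" position, push its color towards red/yellow; otherwise leave it alone.
fn highlight(frag_xy: [f32; 2], mouse_xy: [f32; 2], color: [f32; 3]) -> [f32; 3] {
    let dx = frag_xy[0] - mouse_xy[0];
    let dy = frag_xy[1] - mouse_xy[1];
    if dx * dx + dy * dy <= 64.0 {
        // A simple mask: boost red, keep green, damp blue.
        [(color[0] * 1.5).min(1.0), color[1], color[2] * 0.5]
    } else {
        color
    }
}

fn main() {
    let sprite_pixel = [0.4_f32, 0.4, 0.4];
    println!("{:?}", highlight([100.0, 100.0], [103.0, 104.0], sprite_pixel)); // highlighted
    println!("{:?}", highlight([10.0, 10.0], [103.0, 104.0], sprite_pixel)); // unchanged
}
```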