The Road to Alexandria (IG: UPX)

This will be an occasional record of my work with IG: UPX and the technologies and insights I pick up as I learn them. I’m not treating this as a formal journal; as I’ve said elsewhere, I won’t be journalling any of my personal life on this site, for several reasons.

However, the speed at which I’ve been learning as I begin my business journey has been unexpectedly high, and I felt it would be useful to document the journey without disclosing any intellectual property or personal information.

The basics: I’m using IG: UPX, but it is not my sole program. I run multiple ZP and non-ZP titles alongside one another, often at many times the recommended number of loops per week. Recon has been minimal, although it is sometimes difficult to separate recon from everyday tiredness, ordinary bodily inflammation and muscular tension, and the ordinary self-doubt that comes up from time to time.

The actions I’m taking at the moment are preparatory work towards the program I’ll first be offering for sale on a VR app store or two, which I’m code-naming Alexandria. I won’t be discussing the purpose of this app here, or its USPs; I don’t need any unwanted competition in the market.

Building a VR app, game or non-game, is a pretty involved process. You need VR assets with a poly count appropriate for the platforms you are targeting; you need to understand the limitations of the platform your app will run on; you need to understand how to cross-compile the app on your development machine and deploy it remotely to your headset for testing, and to know which platform-specific (e.g. Android) or VR features it’s going to rely on, like head or hand tracking. You need to consider audio cues; you need to choose the engine you’ll be using to interact with the Vulkan or OpenXR APIs; and so on.

So far my determination has been that game development and ordinary app development in VR are quite similar workflows, except that with most apps you won’t have to be quite so concerned about how performant your scene is, since you won’t have potentially hundreds of NPCs to render at once. But in both cases you need a game engine to generate the virtual world and interact with it in a realistic, physics-driven way.

I’ll update this periodically with details of the technologies I’m learning about, my thoughts on what I’m learning, and more specifically the speed at which I’m acquiring new knowledge.


The journey so far:

I won’t go into detail on my work learning Rust. I downloaded some bootcamps and crash courses on 23/5, and by 04/06 I was far enough along to start analysing the code of the scene examples in the hotham library, figuring out how a program for the Oculus Quest 2 was compiled and what each of the settings in the Cargo.toml file did. I picked up the difficult concepts of borrowing and ownership of memory pretty quickly, along with why the various different types exist. Things such as pinning and boxing seem pretty ordinary and self-explanatory: pinning guarantees a value won’t be moved so its address stays valid, and boxing puts data on the heap so it has a known location that can be shared safely across multiple threads. It makes a lot of sense.
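
The concepts above can be sketched in a few lines. This is a toy example of my own, not anything from the hotham codebase:

```rust
// Minimal sketch of ownership, borrowing, and boxing in Rust.
fn path_len(p: &str) -> usize {
    p.len()
}

fn main() {
    // Ownership: `s` owns the String; passing it by value would move it.
    let s = String::from("scene.glb");

    // Borrowing: an immutable reference lets us read without taking
    // ownership, so `s` remains usable afterwards.
    let len = path_len(&s);
    assert_eq!(len, 9);
    assert_eq!(s, "scene.glb");

    // Boxing: the value lives on the heap, so its contents keep a stable
    // address even when the Box itself is moved into another thread.
    let boxed: Box<u32> = Box::new(42);
    let handle = std::thread::spawn(move || *boxed + 1);
    assert_eq!(handle.join().unwrap(), 43);
}
```

The borrow in `path_len(&s)` is the whole trick: the compiler proves at compile time that the reference can’t outlive `s`, which is why no garbage collector is needed.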

I’m also impressed by the depth of Rust’s support for asynchronous networking; libraries like tokio and quinn make it pretty easy to write a multi-threaded client-server architecture in minimal lines of code, and still know it is safe if you follow the proper design principles.
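
tokio and quinn give you the async version of this; as a rough sketch of the same client-server shape using only the standard library (blocking threads instead of async tasks, TCP instead of QUIC):

```rust
// A minimal threaded echo server plus a client, std-only.
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // Bind to an OS-assigned port so the example is self-contained.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Server: one thread per connection, echoing bytes back.
    thread::spawn(move || {
        for stream in listener.incoming() {
            if let Ok(mut stream) = stream {
                thread::spawn(move || {
                    let mut buf = [0u8; 1024];
                    while let Ok(n) = stream.read(&mut buf) {
                        if n == 0 { break; }
                        let _ = stream.write_all(&buf[..n]);
                    }
                });
            }
        }
    });

    // Client: send a message and read the echo back.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"hello quinn")?;
    let mut reply = [0u8; 11];
    client.read_exact(&mut reply)?;
    assert_eq!(&reply, b"hello quinn");
    Ok(())
}
```

The async libraries replace the thread-per-connection model with tasks multiplexed over a few threads, which is what makes them scale, but the ownership rules keeping the shared state safe are the same.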

What I have been focusing on this week, and increasingly since Wednesday, is the asset-development process and learning to use Blender to develop VR assets.

Although there are plenty of paid and free assets available online, I knew early on that, for the Quest and for the purposes of my app, I was ultimately going to have to develop my own. The hotham GitHub repo has a number of example assets used in its sample code, but it manages them with Git LFS due to their size, and as yet I am clueless about using LFS to fetch the original asset files. Anyway, it’s boring trying to compile someone else’s code as-is; you don’t learn anything unless you can tweak the code yourself and get it compiled and running on your own machine.

So I decided the first thing I needed to do, to accomplish my goal of compiling some basic example scenes with head and hand tracking, was to develop a few basic assets of my own. This led to the question of what poly count is right for assets on different platforms. The Blender PBR textures you find online are typically 4K, although it varies. But using a lot of 4K textures in a program on a headset is foolish and isn’t going to work. For one thing, what a waste of memory; do you really need such a high resolution just to texture the walls of a room or a floor? I did a bit of digging in places like Reddit and confirmed that many game textures are much smaller than 4K. You end up trading off between memory for textures, poly count, and the FPS your program needs.
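
To make the memory trade-off concrete, some back-of-envelope numbers of my own (uncompressed RGBA8, ignoring mipmaps and GPU texture compression, which change the totals a lot in practice):

```rust
// Raw texture memory: width * height * 4 bytes (one byte per RGBA channel).
fn texture_bytes(width: u64, height: u64) -> u64 {
    width * height * 4
}

fn main() {
    let mib = 1024 * 1024;
    // A single 4K texture costs 64 MiB raw; a 1K texture costs 4 MiB.
    assert_eq!(texture_bytes(4096, 4096) / mib, 64);
    assert_eq!(texture_bytes(1024, 1024) / mib, 4);
    // So dropping wall/floor textures from 4K to 1K is a 16x saving
    // per texture, before the engine even starts counting polys.
}
```

A handful of 4K textures can eat a meaningful chunk of a standalone headset’s memory budget on their own, which is why so many game textures are 1K or 2K.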

Of course, if you create objects by sculpting, you normally sculpt at extremely high poly counts and then bake the final results into things like normal maps, ambient occlusion maps and so on. All of that then needs to be saved or exported in a format your rendering engine can use (like GLB or glTF). I found out very quickly that baking uses a lot of CPU/GPU, especially at high poly counts. Even just solidifying a mesh plane containing a few million polygons, my machine looked at me and said “you’ve gotta be joking”. So, bottom line: in month two of my business I’ll definitely be investing in an ex-corporate machine with a decent graphics card, of which there are many under $500, some even with an NVIDIA GPU included.
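
On the export formats: glTF is the JSON form and GLB is its binary container. Per the glTF 2.0 spec, a .glb starts with a 12-byte header (a “glTF” magic, a version, and a total length) followed by JSON and binary chunks. A sketch of reading that header — my own toy parser, not hotham’s loader:

```rust
// Parse the 12-byte GLB header; returns (version, total_length) or None.
fn parse_glb_header(bytes: &[u8]) -> Option<(u32, u32)> {
    if bytes.len() < 12 || &bytes[0..4] != b"glTF" {
        return None; // not a binary glTF file
    }
    let version = u32::from_le_bytes(bytes[4..8].try_into().unwrap());
    let length = u32::from_le_bytes(bytes[8..12].try_into().unwrap());
    Some((version, length))
}

fn main() {
    // A hand-built header: magic, version 2, declared length of 12 bytes.
    let mut header = Vec::new();
    header.extend_from_slice(b"glTF");
    header.extend_from_slice(&2u32.to_le_bytes());
    header.extend_from_slice(&12u32.to_le_bytes());
    assert_eq!(parse_glb_header(&header), Some((2, 12)));
    assert_eq!(parse_glb_header(b"not a glb at all"), None);
}
```

Knowing the container layout makes it much less mysterious why engines prefer GLB for shipping: one file, little-endian integers, and the mesh data already in a GPU-friendly binary chunk.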

Now the stats: I’ve gone from knowing basically nothing about Blender beyond the basics that were still the same over 10 years ago, to being able to generate pretty complex meshes, work with alpha texture brushes, create my own alpha textures with Materialize, do basic texture painting, and handle shaders, all within the space of five days or less.

I also found and downloaded material on creating game assets, which I’ll continue working through over the next week. There’s well over 20 GB of videos (probably around 30 GB, in fact), which I’ll follow along with over time to continuously improve my workflow. At the moment I’m working on a machine with 8 GB of RAM and no GPU, which makes high-res texture painting a living hell.

My goal by the end of this week is to be creating some character or prop assets to import into a simple hotham scene, and to attempt to tweak the Cargo.toml so I can compile and install a simple scene app to my Oculus, maybe with a low-poly variant of that texture-painted mushroom at the center of the room, and with my own custom left- and right-hand assets if I can manage it. Time will tell whether this is over-ambitious. I don’t want it to take much longer than this, as I need to have started my marketing prep work before the end of this month to stay on track with my business plan.


While picking apart the code for hotham a few days ago, I learned about the QUIC protocol. QUIC is a stateful, encrypted transport protocol that runs over UDP, and implementations like quinn let you “bring your own I/O”. HTTP/3, defined in RFC 9114, runs over QUIC; and when you read the RFCs, the door is left open to allowing self-signed certificates in certain cases (see section 4.3.4 of RFC 9110, paragraphs 4-6).

I was considering network implementation in python for the server component of a future app and found this library:

This is excellent because it gives me options. I am still divided over whether to use a plain in-source Python implementation for the server, or to develop a separate Rust-based component which simply communicates with the main Python open-source component via pipes or an equivalent IPC mechanism. Realistically, a separate Rust component that communicates with the main program would be more flexible, since the Python side could then simply hand over the MPL data, rather than needing to spawn multiple extra threads to handle the asynchronous multi-channel QUIC communication.
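
The pipe-based split could look something like this on the Rust side: the Python parent writes one request per line to this process’s stdin and reads one response per line from its stdout. The “fetch” command here is a hypothetical protocol of my own invention, purely for illustration:

```rust
// Line-oriented request/response loop for a Rust child process driven
// over pipes by a Python parent (e.g. via subprocess.Popen).
use std::io::{self, BufRead, Write};

// Handle a single request line; hypothetical one-word-command protocol.
fn handle_request(line: &str) -> String {
    match line.trim().split_once(' ') {
        Some(("fetch", asset)) => format!("ok {}", asset),
        _ => "err unknown-command".to_string(),
    }
}

fn main() {
    let stdin = io::stdin();
    let stdout = io::stdout();
    let mut out = stdout.lock();
    // One response line per request line keeps the framing trivial.
    for line in stdin.lock().lines() {
        let line = match line { Ok(l) => l, Err(_) => break };
        writeln!(out, "{}", handle_request(&line)).ok();
    }
}
```

Newline framing is the simplest possible IPC contract; if the messages ever need to carry binary payloads, a length-prefixed framing would replace it, but the loop shape stays the same.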

But I guess we’ll see how it unfolds. I like the idea of a Rust program that can receive the data to serve via QUIC from multiple different sources and update it on request, much like hotham’s asset server, so that I don’t have to rely on my open-source component as the sole arbiter of the visual data it passes to the CS interface.

The whole reason I got very interested in QUIC as soon as I learned about it is not just the encryption, important as that is. The reason is its connection migration and stream multiplexing. QUIC identifies a connection by connection IDs rather than by the underlying IP address and port, which means that if the Wifi drops on the VR headset (as it frequently does), communication with the QUIC server component streaming the scene data can resume without a full reconnect; and because the transport and TLS handshakes are combined, there is much less overhead in establishing a connection than with TCP plus TLS. Headsets like the Oculus Quest 2 are limited in their options for network connectivity, so if we are forced to rely on a Wifi internet connection, a transport that survives network changes is a MUST.


My pretty horrible work so far on sculpting heads. Unintentionally ended up looking a little like a president of a certain country. Let’s see how much better I can get once I watch a few more tutorials.


So this still looks horrible for the number of polys. But the main progress today was starting to understand the brushes in a procedural way rather than thinking of them as real sculpting brushes, because they are really only like art brushes in the most cursory way.

I kept using the grab brush and snake hook etc. and wondering why the hell they weren’t behaving as advertised in the tutorial videos, even when I was using a pen. Understanding that without dyntopo the brushes really just move verts around and don’t create any new topology, and trying to visualise that in my head, brought things a little closer to reality. Also, poly count is really the key to sculpting fine detail; but while you’re building up form, the fewer polys the better, and remeshing as you drag things out is the key to building the full form.

With this one I made a head first, then added a cylinder and joined the two, and then suffered for it because I had to use smoothing and dyntopo until the mesh was contiguous again. Then I started experimenting with masks and the inflate brush, and with creasing and smoothing to divide the cylinder into two legs. Now that I think about it afterwards, no wonder it turned out butt ugly. But it’s a learning experience.


Starting to feel a bit like Einstein thanks to a few things that have drawn my attention to my level of skill. When you’re trying to solve problems of ever-increasing difficulty, it’s easy to forget how far you’ve come. I have to remind myself to be grateful for the several months of Python learning I got in back in Nov and Dec last year, and for my original learning of the Z80 microprocessor and Microworld Basic back on the old Microbee systems.

Also, just looking at the number of courses, torrents, books, video tutorials and so on out there on digital sculpting, procedural textures and the rest, I’m starting to realize how much money is in it. When someone can sell a procedural plants pack or a few Blender tools bundled together for over 100 bucks, and the topic is so popular there are multiple marketplace sites for things like procedural fur or procedural grass, I’d be a fool not to consider making a few bucks off my skill in the area once I’ve beefed up my knowledge a bit. Especially given that you can use Python to automate parts of the Blender workflow.




Getting better at drawing the head. The back part is always the hardest :stuck_out_tongue: and the chin and lips and cheekbones took the longest time to figure out.

these models remind me of mac from always sunny after he eats the nuts:

hey! be nice now little boy, don’t make me comment publicly on your code quality :nerd_face:


:joy: i meant no offense good sir.

Continuing on with the actions being taken and some thoughts about my stack.

Action-wise, now that I’m approaching proficiency with making props by sculpting, I’m focusing on retopology. I’m using some videos from the same series I’ve been watching on designing game objects to understand how to manually retopo my meshes. Practically speaking, it means using snap-to-face, assigning different colors to the new mesh and the old mesh, using a few other Blender settings to display the new mesh on top of the old one, and extruding new faces while a shrinkwrap modifier keeps them aligned to the underlying mesh. The idea is to create quads instead of triangles where possible, and to keep the faces as large as possible without sacrificing significant detail. You close a loop by creating a new face from the last points and the first points of the loop.

This is one of the more tedious parts of game asset design. It exists to reduce your poly count, so that if you need to animate the mesh, or calculate normals and quickly orbit around the object in 3D space, you don’t end up killing your computer (or your VR headset).

I need to get some practice at this for when I create my assets for Alexandria. Then I’ll move on to lower-res texture painting, UVs and baking, and finally exporting the GLB and glTF files for the game engine. Then I can test-compile the sample code for the headset, push it to the headset’s file system, and run it in-headset. At that point I’ll be far enough along to get started with content creation for my channel.

Meanwhile, I have been getting very distracted by the other things going on in the world that could cause issues for my business plan, especially in Europe and North America. So I’ve been periodically running EB to ensure my subconscious has the force and the right alignment to pull me back on track when I need it.


So this is the pain of retopology. Ugh! Barely 10% done, but I’ve learned some important lessons already. First, don’t try to extrude two edges together expecting them to magically merge; you just end up with two edges layered on top of one another and duplicate vertices which take forever to clean up :stuck_out_tongue: Second, apply that shrinkwrap modifier BEFORE you start extruding verts and pushing them around! Otherwise you end up with all sorts of hidden edge problems, where you have to grab each edge and move it a tiny fraction to re-shrinkwrap it.

An hour or two later, after deleting a bunch of duplicate vertices and edges, at least my new mesh topology is somewhat sane, and I know when and when not to loop cut, how to subdivide an edge, and how to join vertices into edges. I won’t be making the same mistakes twice. God, what a painful process; no wonder people pay a ton of money for programs that do this automatically!


Found this while preparing this morning to continue the retopo work.

Okay, now this is just starting to look too easy. Thoughts to myself… polars + hotham versus Python + Vaex / Blender core libraries + OpenXR. How does polars translate across to the ARM architecture? What APIs can we access from within Blender, and is there a pip equivalent? With appropriate textures and a base scene set up, can we dynamically generate meshes for use in-engine? Hopefully the answer is yes. How much GPU/CPU does a polars query take at different scales? Can we stress test in increments of 1 GB to see how it performs? Can we combine this with on-PC streaming of data content, or HTTP/3 delivery of CSV content to headset storage? Many questions needing to be tested out. Look closer at the sample code and API documentation before going further down the rabbit hole.

Not much to report achievement-wise today. The notes-to-self earlier today show that I found new tech, and I’m considering where it fits into my architecture for one of my two programs. I also found more content to help me with my sculpting and retopology learning. But I didn’t really apply any of it today, other than testing the retopo program on my mesh from the other day, which yielded a pretty low-poly version that might lack enough detail to use in any program.

The one thing I will say is that I found myself thinking about the results of the spiritual titles I run periodically, which reminded me of my strong connection to the name of the Living, aka 18. Connecting that back to how I drafted my business plan and my position on VR and AI/ML (and particularly my uneasy relationship with the latter): both need to be tools in service to life, not a replacement for it. And so the apps I develop need to uphold that aspect of the business vision and mission statement where I focus on solution-focused thinking rather than modern ideologies. This is deliberately vague since this is a public thread, but really it’s just me pointing out to myself that I’ve been thinking about the inspiration behind Alexandria and what I need to do to make it a reality and have it be on purpose.


I’d already seen a few videos by this guy, and came back to the channel for the retopo course, which is pretty detailed, but then I noticed his sculpting for “beginners” playlist. Really enjoying those videos… he explains his process a lot better than some of the other people I’ve watched, and his emphasis on certain points is quite on point (“reeallly big brush size…” etc., the emphasis on the point you pull out with snake hook, a lot of the places I had problems starting out). And damn, that first video; I never realised how much you could actually do with snake hook in constant mode.

Not much work accomplished today. This over-exuberant biker dwarf is all I managed before writing the night off. Still running up against the ever-fking limitations of my nervous system, and against not understanding the tools or the viewport well enough to work without constantly screwing up. I keep telling myself it’ll get better.


@emperor_obewan lol what are these CRAZY character models you are creating? :rofl:

Call it evolution, I guess. The human body is one of the most complicated forms to sculpt in 3D. My aim with Blender is to be able to quickly extrude the basic forms and shapes and then sculpt the finer detail in the shortest possible time, and to master the different brush types and the process of creating a model quickly and accurately. I.e., if I can create a humanoid quickly, then less complex shapes like a table or chair should be child’s play.

Having said that, I’ve been working off tutorials that don’t cover all the details. I realized this morning (assuming you can call it morning yet) that I should have been using the select lasso/circle, extrude, and loop cut/subdivide tools in some cases where I was trying to use clay strips or other drawing brushes, because some folk claim to “teach” you to use snake hook/clay strips and dyntopo to extrude form when they’re only really useful in limited cases. They’re like “oh, you can just extrude a neck like this” *snaps fingers*, then you try it and it makes your mesh a broken, mis-shapen mess.

So my idea was to go for speed and efficiency, because I don’t want to spend countless hours and days working on my VR assets if I can avoid it. Then I’ll learn materials and lighting, and export enough GLB models to create the proof of concept.


Things are going well tonight. I managed to figure out downloading the missing GLBs, installed cargo-ndk, and set my environment up for a compilation attempt tomorrow, hopefully. All crates are downloaded in case the internet goes gaga. I feel like I’m actually starting to understand how this all slots together.

Made sure all the targets are installed in rustup… got the headset connected via adb… let’s see how this puppy flies.

Also… this video was pure gold, and I had a good laugh because it’s true! “Dependency hacking”… that’s one of the best descriptions of the problem with modern-day code I’ve heard recently. More than a few snorts and sniggers there.
