Recently, I joined the GitHub Copilot Preview, which has been of great interest to the software and AI community since its release, prompting discussion and criticism from many corners. In a recent interview with Yuval Noah Harari and Audrey Tang, Yuval mentions something that stuck with me about the decisions embedded within how we program:
Social reality is increasingly constructed by code. Somebody designed it so that on the form, you have the check "male" or "female", and these are the only two options. And to fill in your application, you have to pick one. Because someone decided that this was how the form was, this is now your reality... And maybe it's some 22 year-old guy from California who did it without thinking that he is making a philosophical or ethical or political impact on the lives of people around the world.
This weekend, I did the GMTK Game Jam. It's a 48-hour event where people make games matching a theme. This year's theme was "Joined Together". I made a game called "Cold Weld", using voxul, the little Unity tool I've been tinkering with and just wrote a post about.
If you like it and you're reading this before the 21st of June 2021, you can vote on this game here. If you didn't like it, er... forget that last sentence. Play it below:
Introducing voxul, a voxel system and editor tool for Unity 3D. Use it to build voxel meshes, objects and levels.
The best way to demonstrate this tool is by showing it in use. Unfortunately, I'm a much worse voxel artist than I am a tools programmer, so the demonstration is but a shadow of the tool's potential in my hands. I'd love to get it in the hands of someone a little more artistically talented and get some feedback. This is open source, and there is still plenty of optimization, polish and work to be done.
An idea I've been having a lot of fun playing around with is using little generative algorithms to build mapping functions. When we normally think about a neuron within a deep neural network, we think about a point within a hyperdimensional space. The dimensionality of this space is defined by the number of neurons in the next layer, and the position within that space is defined by the values of those weights and biases.
If we think about what this neuron is actually doing, it is forming a mapping between an input and an output. We store this mapping naively as a very large vector of weights. When we want to see what a weight is, we just look up its index within that big vector. But imagine if you were a young coding student, and you were given the task of writing a function that maps some input to some expected output. For instance, mapping an input to its square. Would you really implement your function like:
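A hedged sketch of that contrast, in Python (all names here are mine and purely illustrative): the table-based version mirrors how a layer's weights are stored as one big vector indexed by position, while the second is the one-line rule a student would actually write.

```python
# Naive approach: precompute and store every input/output pair,
# the way weights sit in a large flat vector.
square_table = [x * x for x in range(1000)]

def square_lookup(x: int) -> int:
    # To find the output, just look up its index in the big vector.
    return square_table[x]

# What a coding student would actually write: a generative rule
# that produces the mapping on demand, with no stored table.
def square(x: int) -> int:
    return x * x

assert square_lookup(42) == square(42) == 1764
```

The lookup table only covers the inputs someone enumerated in advance; the generative rule covers them all in a single expression, which is the intuition behind replacing stored weights with small generative algorithms.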
Charlie Bones' Interdimensional Help Centre
This is my entry for the Global Game Jam 2021! I did it all myself with publicly available assets for things like audio and textures, combined with a little voxel engine I've been building.