Semantic Neurons

10 min. read Submitted 28/04/2021 Last Edited 02/05/2021 #writing #programming #ai

An idea I've been having a lot of fun playing around with is using little generative algorithms to build mapping functions. When we normally think about a neuron within a deep neural network, we think of it as a point within a hyperdimensional space. The dimensionality of that space is defined by the number of neurons in the next layer, and the neuron's position within it is defined by the values of its weights and biases.

If we think about what this neuron is actually doing, it is forming a mapping between an input and an output. We store this mapping naively as a very large vector of weights; when we want to know what a particular weight is, we just look up its index within that big vector. But imagine you were a young coding student, given the task of writing a function that maps some input to some expected output - for instance, mapping an input to its square. Would you really implement your function like:
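A minimal sketch of the contrast being set up, assuming Python and purely illustrative names (not the post's own code): the "stored mapping" version writes down every input/output pair explicitly, the way a weight vector stores a neuron's mapping, while the generative version is a tiny rule.

```python
# Naive "mapping as stored table": every input/output pair is written
# down explicitly, analogous to a weight vector storing a neuron's mapping.
SQUARES = {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

def square_by_lookup(x):
    # Only works for inputs we remembered to store ahead of time.
    return SQUARES[x]

# Versus the small generative rule a coding student would actually write:
def square(x):
    return x * x
```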


Egos Are Fractally Complex

8 min. read Submitted 19/04/2021 Last Edited 28/05/2021 #writing #philosophy

"All models are wrong, but some are useful" -- George Box

As our ability to model and simulate systems grows, we expend more and more computation on simulating ourselves - human agents, society, the systems which make up our existence. We simulate economic systems, weather systems, transport systems, power grids, ocean currents, the movements of crowds, and a million other models of our real world that attempt to predict some spread of possibilities from a given state.[^1] However, a model can only simulate the abstractable, and there is one object which remains resolutely unabstractable - the agency of egos.[^2] We may find ourselves immensely frustrated by this property in the future, and I believe overcoming it to be an insurmountable task. Here's why.


On 'Some Moral and Technical Consequences of Automation'

3 min. read Submitted 14/04/2021 Last Edited 17/05/2021 #writing #ai #technology

In 1960, Norbert Wiener - widely considered the originator of the concept of cybernetics - published a short essay entitled "Some Moral and Technical Consequences of Automation". Here's the article that got me there; it is mostly about social media and an abstracted reapplication of these concepts, but it ties Wiener's essay in a bit.

"I find myself facing a public which has formed its attitude toward the machine on the basis of an imperfect understanding of the structure and mode of operation of modern machines." -- Norbert Wiener


Why Democracy?

17 min. read Submitted 04/04/2021 Last Edited 06/04/2021 #writing #politics #question

I am an avid and radical believer in the systemic property of democracy. But if you had asked me (before I wrote this, anyway) why I hold such a strong and deeply-held belief, I would have been uncomfortable with the amount of cultural conditioning that would come to mind. I grew up in the West, where you are saturated with a nominally pro-democracy viewpoint for your whole life, and so it is easy to endorse democracy as an ethical axiom in itself, as opposed to something held in support of ethical axioms. It isn't enough for me to just feel strongly in support of radical democracy - I need to be able to tell you why.


Why Computers Probably Will Make Themselves Smarter

5 min. read Submitted 01/04/2021 #writing #ai #technology

Recently, author Ted Chiang wrote an article entitled "Why Computers Won’t Make Themselves Smarter". In this article, Chiang argues that concerns around a self-iterating Artificial General Intelligence (AGI) emerging as a superintelligence are unfounded.

"We fear and yearn for “the singularity.” But it will probably never come." -- Ted Chiang

