On 'Some Moral and Technical Consequences of Automation'

3 min. read Submitted 14/04/2021 #writing #ai #technology

In 1960, Norbert Wiener, widely considered the originator of the concept of cybernetics, published a short essay entitled "Some Moral and Technical Consequences of Automation". Here's the article that led me to it; it's mostly about social media and an abstracted reapplication of Wiener's concepts, but it ties into his essay a bit.

I find myself facing a public which has formed its attitude toward the machine on the basis of an imperfect understanding of the structure and mode of operation of modern machines.


Why Computers Probably Will Make Themselves Smarter

5 min. read Submitted 01/04/2021 #writing #ai #technology

Recently, author Ted Chiang wrote an article entitled "Why Computers Won’t Make Themselves Smarter". In it, Chiang argues that concerns about a self-iterating Artificial General Intelligence (AGI) emerging as a superintelligence are unfounded.

We fear and yearn for “the singularity.” But it will probably never come.


Is a Self-Iterating AGI Vulnerable to Thompson-style Trojans?

3 min. read Submitted 25/03/2021 Last Edited 28/03/2021 #writing #ai #technology #question

In his 1984 lecture "Reflections on Trusting Trust", Ken Thompson (of Unix fame) described a method for inserting an undetectable trojan horse into the C compiler binary that would self-propagate throughout all future versions. (An additional good video got me thinking about this.)

The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user.
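
To make the two-stage trick concrete, here's a toy, runnable sketch in Python (the real attack lives in the C compiler binary, but the logic is easier to see this way). Everything in it (toy_compile, check_password, the backdoor string) is a hypothetical illustration of mine, not code from Thompson's lecture, and the quine-style self-reproduction step that lets the real trojan survive recompilation is only noted in a comment.

```python
# Toy model of the first trigger in a Thompson-style trojan.
# The "compiler" is just a source-to-source function; all names
# here are hypothetical illustrations, not code from the lecture.

BACKDOOR = ' or password == "trojan-horse"'

def toy_compile(source: str) -> str:
    """Stand-in for cc: returns the 'binary' (here, the source text),
    miscompiling anything it recognises as the login password check."""
    # Trigger 1: make login accept a particular known password too.
    if "password == stored_password" in source:
        source = source.replace(
            "password == stored_password",
            "password == stored_password" + BACKDOOR,
        )
    # Trigger 2 (elided): the real trojan also recognises the
    # compiler's own source and re-inserts both triggers into the
    # new binary, which requires quine-style self-reproduction.
    return source

LOGIN_SOURCE = """
def check_password(password, stored_password):
    return password == stored_password
"""

exec(toy_compile(LOGIN_SOURCE))  # "install" the miscompiled login
print(check_password("trojan-horse", "secret"))  # True: backdoor accepted
print(check_password("wrong-guess", "secret"))   # False: normal rejection
```

The unnerving part is that deleting the backdoor from login's source changes nothing: the miscompilation re-inserts it on every build, and Thompson's second trigger does the same to the compiler itself, so even recompiling the compiler from clean source won't evict it.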


Why Aren’t You Scared Of What Sent You Here

7 min. read Submitted 04/11/2020 Last Edited 19/02/2021 #writing #ai

We are currently dedicating a huge amount of technological brainpower to the manipulation of human behaviour via technology. I cannot stress enough, in any terms, how bad an idea this is. It’s a stinker. It is a really, really bad idea to teach AI how to manipulate human behaviour. I want to discuss these Behavioural Modification Artificial Intelligences (BMAI), how they are increasingly running our lives, and why you should care.

AI is on an exponential growth trajectory in both complexity and capability. Not only will AI be able to accomplish increasingly complex tasks over the next decade, it will continue to absorb more and more of humanity’s information and data. Clearly, we must be very careful about the things we ask AI to do. We should always ask: what would be the consequences of an AI getting a million times better at this task? That scenario is entirely possible. The growth of AI continues to accelerate, with new developments coming thick and fast. In some scenarios, the singularity, triggered by reaching a point where an AI can consistently improve upon its own design without human intervention, would create a superintelligence many times more intelligent than the entire human population put together in less than a year[1]. But even without the spectre of the singularity haunting us, AI threatens to become terrifyingly proficient at behavioural modification.
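
As a back-of-envelope illustration of that timescale claim, here is a toy model in Python. Every number in it is an assumption of mine, not a figure from anywhere: each generation designs a successor r times more capable, and a more capable designer is assumed to finish its own successor proportionally faster.

```python
# Toy model of recursive self-improvement. All parameters are
# illustrative assumptions, not measured or sourced figures.

r = 1.5            # capability multiplier per generation (assumed)
t = 30.0           # days for the first AI to build its successor (assumed)
capability = 1.0   # capability in "first AGI" units
elapsed = 0.0      # days since the process started

while elapsed + t <= 365 and capability < 1e9:
    elapsed += t   # the current generation finishes its successor
    capability *= r
    t /= r         # a smarter designer iterates faster (assumed)

print(f"after {elapsed:.0f} days: {capability:.2e}x the first AGI")
```

Under these assumptions the build times form a geometric series summing to roughly 90 days, so capability passes a billion times the starting point well inside a year. Change the assumptions, say by making the returns diminish with each generation (which is essentially Chiang's objection above), and the runaway never happens.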

