Recently, I joined the GitHub Copilot Preview, which has been of great interest to the software and AI community since its release, prompting discussion and criticism from many corners. In a recent interview with Yuval Noah Harari and Audrey Tang, Yuval mentions something that stuck with me about the decisions embedded in how we program:
Social reality is increasingly constructed by code. Somebody designed it so that on the form, you have the check "male" or "female", and these are the only two options. And to fill in your application, you have to pick one. Because someone decided that this was how the form was, this is now your reality... And maybe it's some 22-year-old guy from California who did it without thinking that he is making a philosophical or ethical or political impact on the lives of people around the world.
People With Privileged Lifelong Access to Trustworthy Public Institutions Confused Why Anyone Wants Cryptocurrency
Citizens of immensely rich nation-states with historically stable and trustworthy public institutions expressed frustration and anger today at the rapid growth and growing impact of cryptocurrency.
"I just don't understand it all," said Melbourne resident Mary S. "I can go to my bank any time and deposit or withdraw good old-fashioned hard cash, as can literally every other person on the planet. And if you don't have trustworthy institutions around you, you should just wait for your government to make them for you. It's called pulling yourself up by your bootstraps, people!" Mary produced a $20 bill to demonstrate. "This is money you dumb-dumbs. If you don't have any, just remortgage one of your houses with a bank!"
Utility & Utilitarianism
Utilitarianism is an ethical system based upon maximising or minimising some utility function. A utility function, in this sense, is some operation on a set of information that produces a value. In other words, it is a function that makes statements like "state A of the universe is less desirable than state B". In conversations around human ethics, this utilitarian utility function is often vaguely expressed in terms of "well-being" or "happiness". On occasion, I've been called a utilitarian because of my discussions of heuristics, ethical systems as any systems which rank the future, and "maximising" or "minimising" things, but I'm not quite sure I understand what it means to be one.
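To make the definition above concrete, here is a minimal sketch of a utility function in code. The representation of a "world state" as a dictionary, and the particular quantities and weights used, are hypothetical illustrations of the idea, not anything the essay itself prescribes:

```python
# A toy "world state" is just a set of information: here, a dict of
# measurable quantities. These names are hypothetical examples.
def utility(state: dict) -> float:
    """Operate on a set of information and produce a single value."""
    return state.get("well_being", 0.0) - state.get("suffering", 0.0)

state_a = {"well_being": 5.0, "suffering": 3.0}
state_b = {"well_being": 6.0, "suffering": 2.0}

# The function induces a ranking over states: in this toy example,
# "state A of the universe is less desirable than state B".
assert utility(state_a) < utility(state_b)
```

Any function with this shape, mapping states to comparable values, produces exactly the kind of ranking statement described above, whatever its internals happen to reward.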
It seems to me that all ethical systems contain utility functions, by their very nature of being information systems which rank future states of the universe. Someone driven by a religious ethical system might take their utility function from an ancient holy text. Someone driven by egoism centers their utility function entirely on their own pleasures and desires. A nihilist may choose to implement an entirely random utility function. All of these ethical choices are driven by some concept of utility. What then distinguishes these systems from the label of utilitarianism?
How Would Aliens Know A Dog Isn't A Robot?
An alien civilization is testing out their magical matter transporter, which is able to transport random sections of the universe into their lab. One day while testing, they press the button and a dog from Earth appears in the transporter in front of them. The aliens are shocked and overjoyed. This is by far the most interesting thing to ever come out of their transporter. But due to the random nature of the technology, they cannot keep transporting bits of the Earth to them to learn more. All they have is a 6-year-old golden retriever named Charlie.
Occasionally, I read LessWrong. People on LessWrong often describe themselves as "rationalists". It's a little tricky to pin the term down beyond "users of LW", though many have tried. I won't try to define a community that I don't identify with, and that isn't the point of this post. I wouldn't describe myself as a rationalist for reasons that should be obvious if you read more of my writing, though, to be honest, if you did ask me what intellectual labels I identify with, I'd probably just say "dumbass". Rather, I'd like to examine what I think is the community's overuse of parable as a deceptively irrational rhetorical tool for explaining epistemic and empirical concepts.
A parable is a story meant to convey some sort of lesson to the reader. Many religions and schools of philosophical thought are packed with parables meant to convey moral or logical ideas. It is, quite evidently, a fantastic way to get people to think about a certain thing and to retain their interest. It's also, unfortunately, a deeply flawed one if our intention is truly for the reader to rationally and objectively evaluate our ideas.