Acronyms Create Entropy

I wanted to be a normal CEO, farmer, coder, programmer and just couldn’t figure out how to fit in. I have a thing; it is not good or bad, but I have an issue in that I’ve not read much. I don’t like reading or writing, and that is exactly why I am here. This is my penance for being dyslexic and partly illiterate. I write code all year, but very few words. I enjoy the logic of making machines do stuff, though I prefer making almost any change to my natural world. This is why I have a tractor: to move stuff. Some days moving a pile of rocks, trash, or manure (farmers move shit and like it) can bring more joy than having done something in a digital space.

I don’t spend much time thinking about thermonuclear war. I live near one of the oldest oil refineries in the western US. An attack on it would vaporize my seven acres of dirt and rocks. My neighbors and I would be those folks that just never see World War III. I do spend some of my daily cognitive load moving buckets of cybersecurity-related ingredients. I’ve spent some time thinking about cyber war, and I’m particularly tired of cybersecurity being linked with warfare in the West’s narrative. Though I realize that if we ever do have physical conflict in the CONUS, cyber is gonna affect lots of folks both digitally and kinetically.

It is this kinetic effect that I believe we need to recognize is growing. Cash isn’t used for basic goods anymore. Assuming that networks will always be maintained so that folks can eat and transport goods or themselves is where I think we have made a mistake. In my world I have only a thin strand of glass connecting my world with yours; it is this perspective that I hope to bring here.

AI is consuming a significant portion of the headlines, memes, and investment today. Understanding that new humans will be working alongside AI will not make all humans comfortable. Could Internet security professionals prevent AI from becoming the aggressor, or should we be leveraging AI in the military? Both of these questions require us to rely on some really smart people to understand the risks. The really smart folks mostly work for really rich people, and they tend to care about shareholders, not you.

Wealthy and powerful people are afraid of an AI that has the trait of compassion. We definitely shouldn’t write software that allows a machine to think about human feelings or have any concern for individual well-being. Would a machine with just the most basic sense of compassion suggest you purchase new shoes? Would it find ways to maximize your happiness and longevity? Can we hire one? This is my new business plan. I just want to build machines that lift all the money from your wallet, so I can go and do some good. This is known as the Gates plan: I’ll build really broken, poorly secured software, get rich, then go help fix humans with all the money I got.

I’m ready for a machine that can detect fraud so that when a bank’s CEO does it, that person goes to jail. Or a machine that identifies when you are being influenced and just removes that portion of the content. Many of the business models in Silicon Valley rely on being able to influence large portions of society to make unhealthy decisions. The New Yorker (via David Graeber) provides a deep dive into why some jobs are Bullshit™: https://www.newyorker.com/books/under-review/the-bullshit-job-boom

Bullshit™ jobs are going away, but most of the new jobs will be for machines, and you don’t own a machine. The machines are gonna make money; if you are not creating them, you won’t make money. Only the folks that create the machines that make money are gonna have money. You will have money, paper money, but it won’t be worth bits.

When an Uber car hit a pedestrian in Arizona, the first piece of evidence released was video. Lots of folks related to it: how could anyone have seen the lady with the bike? The car wasn’t allowed to stop itself, but the human in it didn’t know that. The Economist [2] explains why it hit the pedestrian and that it had enough information in its LIDAR feed to stop itself, though none of this was communicated to the human in the car, nor to the public at large.

In the South (where I grew up), on dirt roads that were washed by a daily afternoon rain, the deer whistle saved lives. Back then you might be driving home late on a Friday night (this was before we had DUIs, but I’m dating myself). The deer whistle makes a sound audible to wildlife but not to humans. It was the deer in the headlights that would make you swerve, and the dirt, clay, and mud ensured that your car did flips into the woods. This usually killed the driver and passengers. A deer whistle affixed to the hood of one’s car would alert all the wildlife TGTFO2TW (to get the fuck out of the way). Alerting humans that an autonomous vehicle is nearby might be something that would save lives.

I believe that the rich and powerful, the folks that are funding all these startups, are the ones most worried about AI becoming conscious. An AI that had my dog’s capacity for empathy could be destructive to those in power. An AI that had the most basic human traits might be dangerous to governments and the politics of our world leaders. Certainly never let an AI make a digital form of money. Well, we have some of the prerequisites for a post-apocalyptic AI that makes digital money and leverages it to help those in need. Thankfully no one is working on that code. It could be a long way off before we have an understanding of how an AI might even reapportion assets in society to meet basic needs. The Chinese think they can do this through centrally planned social credit, and the West believes in a perfectly informed market.

Humans create the rift between those in control of the vast quantities of data organized against a society and the society itself as it assimilates a digital nervous system. From just a few years of watching folks forgo exercise, food, and rest to click on stuff, I’ve come to conclude the apocalypse is here; you just can’t see it through your screen yet.

Merry Christmas!

[2] https://www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian



Categories: Cyber
