Robots Weekly 🤖: 3 Reasons Not to Fear The Robot Uprising

Posted on Tuesday, February 20, 2018

There is a lot of fear that, thanks to recent advancements in AI, a robot uprising will inevitably replace humanity. I don’t really buy into these fears, and thankfully I’ve found some really smart people who can back me up on it.

So let’s take a look at what these smartypants have to say and maybe we can all breathe a little easier. At least until this terrifying, backflipping robot is mass produced, because then we’re all doomed for sure.

Reason #1: Because AI & Mars are Basically the Same 🚀

Andrew Ng is a deep learning “celebrity” so his take is as solid as anyone’s. (“Wait, I thought you said AI, what is this deep learning nonsense?” Well, dear reader, as our friend Andrew says, deep learning is “one of the key processes in creating artificial intelligence”. So, same-same.)

Why isn’t he worried about AI spelling our doom?

“The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. Hundreds of years from now I hope we’ve colonized Mars. But we’ve never set foot on the planet so how can we productively worry about this problem now?”

Now, let’s not completely ignore the negative consequences AI could bring, but maybe let’s not assume we’ll be wiped out in favor of paperclips by a set of algorithms and code that gets confused if you change one pixel in an image or thinks “Love 2000 Hogsyea” is a Valentine’s Day message.
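To make that last jab a little more concrete, here’s a toy sketch in Python (the weights, “pixels”, and labels are all made up for illustration; this is not a real image model): a tiny linear classifier sitting near its decision boundary happily flips its answer when a single input value gets nudged.

```python
# A toy sketch (hand-picked numbers, not a real image model) of the kind
# of brittleness being teased above: a linear "classifier" near its
# decision boundary flips its answer when one input is nudged slightly.

weights = [0.9, -0.4, 0.6, -0.8]   # a tiny made-up "model"

def classify(pixels):
    """Label is decided by the sign of a weighted sum of the inputs."""
    score = sum(w * x for w, x in zip(weights, pixels))
    label = "cat" if score > 0 else "definitely not a cat"
    return label, round(score, 3)

image = [0.2, 0.1, 0.3, 0.25]       # the original "image"
tweaked = list(image)
tweaked[3] += 0.2                    # bump a single "pixel" a little

print(classify(image))     # ('cat', 0.12)
print(classify(tweaked))   # ('definitely not a cat', -0.04)
```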

Reason #2: Because What is Feared is Already Here 😱

So this isn’t exactly uplifting, but it means that what is being pitched as AI’s terrifying feature isn’t unique to AI. Elon Musk’s example of a strawberry-harvesting system that kills off all humans to make “strawberry fields forever” is really just a runaway objective function.

What is a runaway objective function? You can Google it, or just remember Disney’s Fantasia. Mickey puts on the magician’s hat (wizard? warlock? I don’t remember. Same-same), tries to be the head mouse in charge before he’s ready, and enchants a broom to haul water for him. The broom does exactly what it was told, never stops, and floods the place. That’s a runaway objective function: the goal keeps getting optimized long after anyone wants it to.
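If an analogy with code helps, here’s a minimal sketch (the plots, yields, and names are all invented for this example, not anybody’s actual system): the “optimizer” does exactly what it was asked to do, maximize yield, and nothing it wasn’t asked, like checking whether people live on the land.

```python
# Toy "runaway objective function": the optimizer is told only to
# maximize strawberry yield, so it converts every plot of land --
# including the ones people live on -- because nothing in the
# objective says not to.

plots = [
    {"name": "field A",  "yield": 10, "inhabited": False},
    {"name": "field B",  "yield": 7,  "inhabited": False},
    {"name": "suburb C", "yield": 12, "inhabited": True},   # oops
    {"name": "city D",   "yield": 15, "inhabited": True},   # big oops
]

def objective(chosen):
    """What the system was actually asked to maximize: total yield."""
    return sum(p["yield"] for p in chosen)

# Greedy "optimizer": keep taking the highest-yield plot available.
chosen = sorted(plots, key=lambda p: p["yield"], reverse=True)

print("total yield:", objective(chosen))
print("plots converted to strawberries:", [p["name"] for p in chosen])
print("inhabited plots bulldozed:",
      [p["name"] for p in chosen if p["inhabited"]])

# The fix isn't a smarter optimizer; it's a better objective -- e.g.
# filtering out inhabited plots or adding a penalty term for them.
```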

I think Tim O’Reilly’s most thought-provoking example of a runaway objective function in action is our financial markets (a proto-AI, according to him).

Reason #3: Because Our Fears Are Unfounded ⚱️

I’m a bit of a Kevin Kelly fanboy so this is going to turn into listicles inside of listicles inside of listicles. It’s listicles all the way down really.

According to our guide on this adventure through AI myth-busting and fact-building, there are 5 assumptions underlying the AI fears, and each has little evidence behind it.

  1. AI is already smarter than us and will continue getting smarter at an exponential rate
  2. AI will become AGI (artificial general intelligence, a.k.a. good at/knows everything), which means it will be human-like
  3. We can recreate human intelligence in silicon
  4. Intelligence can be expanded without limit
  5. Exploding superintelligence can solve most of our problems

And now for 5 heresies (his word) that have more evidence and act as counterpoints to the above 5, with sub-points to elaborate on each.

  1. Intelligence is not a single dimension, so “smarter than human” is meaningless
    • Intelligence is varied and exists on a series of continuums. We will invent new modes of cognition and solve problems that were previously “unsolvable” (the latter is happening now), which will lead us to believe these new entities are “smarter”, but really they’re just different.
  2. Human minds aren’t general-purpose, and AI won’t be either
    • There is a simple engineering maxim that not even AI will be immune to: you can’t optimize every dimension.
      • The classic business school adage “You can be good, fast, or cheap. Pick two.” illustrates this nicely.
  3. Cost will constrain emulation of human thinking in other media
    • An integral part of what makes human intelligence “human” is the hardware (a.k.a. our brains) so it will be hard to recreate this intelligence without recreating the hardware.
      • Interjection from me: cost will also constrain things in a more physical way. Any given computational setup has hard limits on hardware specs and performance, which puts a ceiling on how well the algorithms and systems running on it can perform.
  4. Dimensions of intelligence aren’t infinite
    • All physical attributes are finite (your biceps can only get so big, bro) so reason is probably finite too.
    • New tech will not be “super-human” but “extra-human”, different than us and our experiences but not necessarily better. (sound similar to #1?)
  5. Intelligences are only one factor in progress
    • “Problems need far more than just intelligence to be solved.” (so, so true)

That concludes this week in nested lists. Thanks for joining us.

Check out more Robots Weeklies


2 thoughts on “Robots Weekly 🤖: 3 Reasons Not to Fear The Robot Uprising”

  1. Roberto says:

    So….if I understand this correctly, we don’t have to worry about what’s worth worrying about today because it’s already happened and has been worried about, and we don’t need to worry about what’s worth worrying about tomorrow because others will have plenty of time to worry about it?

    1. Kyle says:

      Exactly! The problems either aren’t unique to AI, so it’s fear-mongering, or they’re so far off in the future that it’s hard to know if we’re worrying about the right thing. And at this point they’re just systems, so all the problems are reflections of their human creators (coding in bias). The current systems don’t adapt well either. If the Go board had been one row bigger when AlphaGo won, it would have failed miserably because it wouldn’t have understood what was happening.

      So as that one famous fish says, “don’t worry, be happy.”
