
Robots Weekly: Dangerous Words ☠️

You may have heard recently about some fancy new AI text generation system that is “too dangerous to release”. That model was created by OpenAI and sports a catchy name: GPT-2. There was a LOT of digital ink spilled about it following the announcement; this post is my attempt to summarize it.

What happened?

OpenAI went full hype on a quasi-release of a new, “state-of-the-art” language model that can generate surprisingly coherent text.
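(For the curious: “language model” here just means a system trained to predict the next word given the words before it, and generation is that prediction run on repeat. Here’s a toy sketch of the loop, with a hand-written probability table standing in for GPT-2’s actual neural network — purely illustrative, not how GPT-2 is implemented.)

```python
import random

# P(next word | current word): a made-up bigram table, purely illustrative.
# GPT-2 replaces this table with a large neural network conditioned on the
# whole preceding text, but the generation loop is the same idea.
toy_model = {
    "the": {"unicorns": 0.5, "scientists": 0.5},
    "unicorns": {"spoke": 0.6, "grazed": 0.4},
    "scientists": {"were": 1.0},
    "spoke": {"English.": 1.0},
    "grazed": {"quietly.": 1.0},
    "were": {"astonished.": 1.0},
}

def generate(word, max_words=10):
    """Repeatedly sample the next word until the toy model runs out."""
    words = [word]
    while word in toy_model and len(words) < max_words:
        choices = toy_model[word]
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(word)
    return " ".join(words)

print(generate("the"))  # e.g. "the unicorns spoke English."
```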

Why the air quotes?

Because we don’t actually know if it’s state of the art. They didn’t release the actual model, just a stripped-down version of it (117M parameters instead of the full 1.5 billion). So no other researchers or parties can try to replicate the results. They didn’t open source their dataset either (but the internet fixed that).

Wait, what does “surprisingly coherent” mean?

It means it knows all about South American Four Horned Unicorns!!!

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

That text is (a portion of) what the model generated in response to a two-sentence prompt provided by the researchers. I believe it was also one of many outputs generated and cherry-picked for its quality. (This isn’t necessarily a bad thing, just clarifying that it isn’t blowing the doors off on the first try.)
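If you want to poke at this yourself, the small model OpenAI did release can be sampled in a few lines of Python. A minimal sketch using the Hugging Face transformers library (a third-party wrapper, not OpenAI’s own tooling); generating several samples and eyeballing them mimics the cherry-picking described above:

```python
# Sketch: sample several continuations from the released small GPT-2 and
# pick the best by hand. Assumes the third-party Hugging Face `transformers`
# package (pip install transformers torch) -- not part of OpenAI's release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The two-sentence unicorn prompt from OpenAI's blog post.
prompt = (
    "In a shocking finding, scientist discovered a herd of unicorns living in a "
    "remote, previously unexplored valley, in the Andes Mountains. Even more "
    "surprising to the researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a handful of candidates; a human picks the most coherent one.
outputs = model.generate(
    input_ids,
    do_sample=True,                      # sample rather than decode greedily
    top_k=40,                            # truncated sampling, as OpenAI described
    max_length=200,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for i, out in enumerate(outputs):
    print(f"--- sample {i} ---")
    print(tokenizer.decode(out, skip_special_tokens=True))
```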

Is this a big deal?

Yes. And no.

Yes, because much of AI’s success and growth to this point is due to the fact that a ton of stuff is open sourced and shared. Time isn’t spent recreating others’ closed systems, but on testing them out and iterating to set new benchmarks.

No, because not everything is open sourced all the time. And plenty of other people are working on models like this one. (Also because someone could probably come pretty close to recreating this based on what was released and the open-sourced dataset mentioned above.)

So, why does this seem like a big deal?

First, because the group behind it is called OpenAI and this makes for great ClosedAI jokes and the like.

Second, because of the way they rolled it out. There was a coordinated PR approach for the announcement that doesn’t accompany most model releases and benchmark announcements (unless you’re DeepMind). OpenAI is also positioning this as a conversation starter for how the AI community should handle the release of potentially dangerous models and technologies.

And the third, newest reason is that OpenAI announced they will be creating a for-profit arm of the organization to support their research and the continuing non-profit piece. They’ve termed the new approach “capped-profit,” as it sets a cap on the returns investors and employees can reap.

What it all really boils down to is that one of the most visible organizations spearheading AI research, in an open format, appears to be starting 2019 with a massive shift in course away from their early mission.

Next time I’ll dive into the various reasons supporting both sides of the “to release or not to release” divide in more detail.


Read more Robots Weekly