Robots Weekly: A Little Groundwork
These Robots Weekly posts started by jumping into the deep end a bit. Well, not the deep end, but not the shallow end. The end of appropriate depth? I digress…
Let’s step back and hit some of the basics.
Machine Learning Explained
This is a great look at the history of machine learning (it started with matchboxes in the ’60s). The first machine learning setup (it wasn’t a computer) was designed to play tic-tac-toe. It shows that machines don’t learn the way humans do: we go for 3 in a row, while the machine picks moves based on the present board without understanding the concept of “3 in a row.” It’s important to understand the parameters of the problem you’re trying to solve, you need to run the program/algorithm/robot a whole bunch of times so it can learn, and you should vary the learning sets if possible. Or make them really, really large. You don’t want to train the machine on a faulty data set, because then it will only learn faulty knowledge. For example, training it against a player that makes the optimal move every time will teach the machine to play to a draw, because winning is impossible. You also need to determine how to give credit to the different elements in a solution chain (like GA attribution models).
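For a feel of how that matchbox machine “learns” without understanding anything, here’s a minimal sketch of the idea in Python. The structure and names are mine, assumed from the article’s description, not the original design:

```python
import random

# A minimal matchbox-machine sketch: one "matchbox" (dict entry) per
# board state, holding a bead count per legal move. Moves from winning
# games gain beads; moves from losing games lose beads. The "machine"
# never knows what "3 in a row" means -- it only reshuffles bead counts.
matchboxes = {}  # state (str) -> {move (int): bead count}

def pick_move(state, legal_moves):
    """Pick a move for this state, weighted by bead counts."""
    box = matchboxes.setdefault(state, {m: 3 for m in legal_moves})
    moves, beads = zip(*box.items())
    return random.choices(moves, weights=beads)[0]

def reinforce(history, won):
    """Credit assignment: reward or punish every move made in the game."""
    for state, move in history:
        box = matchboxes[state]
        if won:
            box[move] += 3                      # winning moves gain beads
        else:
            box[move] = max(1, box[move] - 1)   # losing moves lose beads
```

Run enough games against varied opponents and the bead counts drift toward decent play; run it only against a perfect opponent and, as the article notes, the best it can ever reinforce is a draw.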
Takeaway: Use lots of good data, and spend as much time as you possibly can thinking through the problem up front so you can map the system properly.
Skynet-is-far-off update: AlphaGo would have melted down had the board been any size other than the standard 19×19
Challenges in Deep Learning
It ain’t all sunshine and rainbows; we’ve got some shiznit to figure out. A lot of the challenges raised seem to fall on the planning/people end: basically, these systems are only as good as the people who program them. The biases, aversions, and misunderstandings of humans can be transferred to the machines through the coding and training of the algorithms.
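As a toy illustration of how that transfer happens (the scenario and numbers here are made up for the sketch), even the simplest possible learner, one that just memorizes the majority outcome, will faithfully reproduce the bias baked into its training data:

```python
from collections import Counter

# Hypothetical biased history: equally qualified candidates, but past
# human decisions approved group "b" far less often than group "a".
history = [("a", "hire")] * 80 + [("a", "reject")] * 20 \
        + [("b", "hire")] * 20 + [("b", "reject")] * 80

# The "model" just learns the majority outcome per group -- and still
# soaks up the human bias in the data.
counts = {}
for group, outcome in history:
    counts.setdefault(group, Counter())[outcome] += 1

for group, tally in counts.items():
    prediction = tally.most_common(1)[0][0]
    print(f"group {group}: model predicts '{prediction}'")
# group a: model predicts 'hire'
# group b: model predicts 'reject' -- the human bias, now automated
```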
Takeaway: You can’t take a terrible plan and a great algorithm and make magic, at least not good magic.
From Infinity to 8: Translating AI into real numbers
AI isn’t magic, it just seems like it. It depends on the data you feed in, so make sure you have good data (good meaning useful, not necessarily high quality). In the AI chicken-or-egg scenario, algorithms are the chickens, data is (are?) the eggs, and the results are bacon (because mmm… bacon). Also, data should follow the 4 V’s: volume, variety, velocity, and veracity.
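A rough sketch of what checking the 4 V’s might look like in practice (the thresholds, field names, and row format are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

def four_vs_report(rows, now=None):
    """Crude sanity checks mapping to the 4 V's. Assumes `rows` is a
    list of dicts with 'value' and a timezone-aware 'timestamp'."""
    now = now or datetime.now(timezone.utc)

    volume = len(rows)                              # enough data at all?
    variety = len({r["value"] for r in rows})       # distinct values seen
    fresh = [r for r in rows
             if now - r["timestamp"] < timedelta(days=1)]
    velocity = len(fresh) / max(volume, 1)          # share arriving recently
    veracity = sum(r["value"] is not None           # share that isn't junk
                   for r in rows) / max(volume, 1)

    return {"volume": volume, "variety": variety,
            "velocity": velocity, "veracity": veracity}
```

Nothing fancy, but running something like this before training is the cheap insurance against the “Bad data = Bad AI” problem below.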
Takeaway: Know what AI does: it processes data. Bad data = Bad AI. It’s not a magic button, it just seems like it.
What Is Explainable AI? How Does It Affect Your Job?
Don’t believe the Skynet hype. Good overview of narrow vs. super intelligence in AI. Narrow intelligence is what we see most of now (AlphaGo, Siri, autopilot). Superintelligence is the good-at-everything kind, but it is (probably) a long way off. What we’re really scared of with AI (or what probably drives a lot of the fear mongering) is the mystery of what current AI systems do: you put in your data, run it through the AI black box, and get an answer, without any idea why that answer is right. Explainable AI is the concept of adding a model and interface to the system that would explain how the AI came to a certain conclusion.
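One simple flavor of that explaining layer is perturbation: wiggle each input and see how much the black box’s answer moves. A minimal sketch, where the model and feature names are stand-ins of my own invention, not a real XAI library:

```python
def explain(model, features, eps=0.1):
    """Toy sensitivity explanation: nudge one feature at a time and
    report how much the black-box score changes. `model` is any
    callable taking a dict of numeric features and returning a number."""
    base = model(features)
    contributions = {}
    for name, value in features.items():
        nudged = dict(features, **{name: value * (1 + eps)})
        contributions[name] = model(nudged) - base
    return contributions

# Hypothetical black box: a credit-score-ish function we pretend
# we can't see inside.
black_box = lambda f: 0.7 * f["income"] - 0.4 * f["debt"] + 0.1 * f["age"]

print(explain(black_box, {"income": 50.0, "debt": 20.0, "age": 40.0}))
# income dominates -> "the score is high mostly because of income"
```

Real explainable-AI tooling is far more sophisticated, but the goal is the same: turn “the box said so” into “the box said so because of these inputs.”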
Takeaway: Explainable AI provides context to the answers that are currently generated in a black box. This mystery is what drives many of the current fears around AI.