Expert Interview (Part 1): Not so Evil Mad Scientist, Paco Nathan on Biology-Informed Artificial Intelligence and the Nature of Feedback Loops

At the recent Strata Data conference in NYC, Paige Roberts of Syncsort sat down with Paco Nathan of Derwen, Inc. In part one of this two-part interview, Roberts and Nathan discuss the origins, current state, and future trends of artificial intelligence and neural networks.

Roberts: Can you introduce yourself to our readers?

Nathan: Certainly. My name is Paco Nathan and I have a background in machine learning. I was heading toward machine intelligence before the "AI Winter" in the '80s. I had this really weird educational experience where I had done artificial intelligence and distributed systems, and I took an extra year in design school. Other, older computer scientists at the time were telling me, "You have to have focus …"

Roberts: Pick something. Get really good at it.

Nathan: Yeah. "Design, artificial intelligence, and distributed systems have nothing to do with each other, and data analysis doesn't fit with the AI side." I went to Bell Labs and got involved in neural networks, did some neural network hardware at Motorola in Austin for a while, then I jumped into smaller companies. I spent about seven years doing neural networks, but we really didn't have enough compute power back in the '80s and '90s. I ended up doing a lot of network engineering and network security, so I taught network security a bit at UT and at ACC. I also got involved in data analysis consulting, and I remember my first "data science" project: a particularly tricky data mining problem in hardware compilers, which I ended up solving with 20 lines of Perl. My boss was like, "Why are you laughing?" Because I'd solved it. Once I saw that, I thought, hey, I think the data side of this is more…

Roberts: I could do this! I know how this works. [laughing]

Nathan: Yeah!

Roberts: I hadn't realized how much background we have in common. I got my first degree at ACC and then my second one at UT. Some of my grad student work was on artificial intelligence and tutoring systems in the '90s, back when we were talking about knowledge graphs and rules-based stuff. And, back in the day, I was also a Perl programmer. I think we all were for a while there. [Laughter]

Nathan: Yeah. Python came in later.

Roberts: So, I just listened to your talk on "Human in the Loop," which was really cool. A lot of the themes circled around one concept: a chess program can beat the best chess player in the world, but what can beat either one of them is an average player teamed with a really good chess program. The human-and-machine team is superior to either one alone. What would you want somebody to take away from that presentation?

Nathan: If we want to look at a hard thing to do, I use a term borrowed from Peter Norvig at Google: "learning in uncertain domains." That's a pretty good definition of what we see with artificial intelligence right now. You can say you're just going to build better and better models, use ensemble methods, and train, train, train on mountains of data, but you're always fighting a diminishing returns curve. If you have human expertise handling the edge cases alongside machine learning, augmenting the models, then you get the best of both worlds. You're not fighting that diminishing returns problem. Instead, you can get past the 95% barrier that we've seen in so many domains.
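[Editor's note: to make the human-in-the-loop idea concrete, here is a minimal sketch in Python. The 0.95 cutoff and the ask_human_expert callback are illustrative assumptions, not anything from Nathan's talk.]

```python
# Human-in-the-loop classification: the model keeps the cases it is
# confident about and defers the uncertain edge cases to a human expert.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.95  # arbitrary cutoff, tuned per application

def predict_with_human_fallback(model, x, ask_human_expert):
    """Return (label, source): the model's answer when confident, else a human's."""
    probs = model.predict_proba([x])[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return model.classes_[probs.argmax()], "machine"
    # Low confidence: route the edge case to the expert. These deferred
    # examples are exactly the ones worth labeling and feeding back into
    # training, which is the loop that gets past the plateau.
    return ask_human_expert(x), "human"

# Toy usage: fit on a trivial dataset; an ambiguous input gets deferred.
X, y = np.array([[0.0], [1.0], [2.0], [3.0]]), np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)
print(predict_with_human_fallback(model, [1.5], lambda x: 1))  # likely (1, 'human')
```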


Roberts: Either a human expert or a machine expert alone gets you to about 95% accuracy, and that's about as good as it gets. Put them together, though, and you can go beyond that. On the data engineering side of things, we've been pushing for so long to get people to be data-driven, which is a linear model, and now you're talking about more of a loop.

Nathan: Very much so. Yes.

Roberts: What's the difference between these two methods?

Nathan: Coming out of big data, we came out of an era of doing a lot of batch work: Hadoop, and even Spark is mini-batch. Even Spark Streaming is still a sort of micro mini-batch, right? It was an era of a lot of batch work and a lot of BI, really looking at "I'm going to run a job and I want to get some results."

Roberts: Input-output.

Nathan: Yeah. That was classic in the early days of control theory, where we just looked at input and output. But a really interesting thing happened toward the end of World War Two, when the mathematician Norbert Wiener invited McCulloch and Pitts to MIT, and they set up the first formal study of neural networks. A lot of the original neural network research came out of that, crossing over from control systems into biology. When you look at that kind of interface, biology is much more concerned…

Roberts: It's messy.

Nathan: Yeah, it's messy. There are no clear answers; there is no "here's your input, here's your output." Organisms don't work that way. They have to live all the time, so there's a lot of feedback always going on. Neural networks are, of course, inspired by biology, and a lot of what we've learned from biology is now influencing where we're going with AI. We're seeing things beyond deep learning, like imitation learning. Have you seen some of the "Everybody Dance Now" demos that came out recently?
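[Editor's note: the input-output versus feedback contrast here is the classic open-loop versus closed-loop distinction from control theory. A toy sketch in Python, with all values invented:]

```python
# Open-loop vs. closed-loop: a fixed command never looks at its result,
# while a proportional feedback controller corrects itself at every step.

def open_loop(command, steps):
    """Apply a fixed command with no feedback, like a one-shot batch job."""
    state = 0.0
    for _ in range(steps):
        state += command              # never observes the outcome
    return state

def closed_loop(target, gain, steps):
    """Proportional feedback: observe the error, feed it back into the input."""
    state = 0.0
    for _ in range(steps):
        error = target - state        # observe the output...
        state += gain * error         # ...and correct toward the target
    return state

print(open_loop(command=1.0, steps=10))             # 10.0, whatever the goal was
print(closed_loop(target=5.0, gain=0.5, steps=10))  # ~4.995, homing in on 5.0
```

[The open-loop run never notices whether it is anywhere near a target; the closed-loop run keeps correcting itself, the same shape as an organism, or a model in production, living on continuous feedback.]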


Roberts: This is new to me.

Nathan: Oh, it's fantastic. (See: "Deepfakes for dancing: you can now use artificial intelligence to fake those dance moves.") It was actually an undergraduate project at Berkeley; the lead author on the paper is an undergrad student. What they did was get people on video dancing badly, then take a music video of really ornate dance choreography. It's essentially an aspect of deep learning where you can train a model up to a point, then do a transfer: peel off the top layers and retrain them for a specific case.

So, let's say your model trained on a professional dancer doing some complex choreography in a music video. You shoot some videos of your friends flailing around, run them through the system, and suddenly your friends are doing all the same dance moves as the expert, even if they're not particularly good dancers in the first place. It's a way of doing video imitation where I could take video of my body moving, and suddenly synthesize video of me moving differently, or better.

You can also do this with speech. At the last AI conference, someone from Zino Ventures in China showed a video of the president of the United States speaking perfect Mandarin, using all the AI advances. It had a little "Made in China" stamp at the end of the video. And it was all done with a type of conditional synthesis that came out of deep learning.

Roberts: It's almost like auto-tuning a singer, only you're auto-tuning a dancer.
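[Editor's note: the motion-transfer system in the paper also involves pose estimation and a generative model, but the "train up to a point, then peel off the top layers and retrain" pattern Nathan describes is ordinary transfer learning. A minimal sketch in Keras; the image size, five-class head, and dataset names are assumptions for illustration.]

```python
# Transfer learning: keep a pretrained base, peel off the original top
# layers, and train a small new head for the specific task.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,       # drop the original classification layers
    weights="imagenet",      # start from features learned on ImageNet
)
base.trainable = False       # freeze the pretrained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_task_images, new_task_labels, epochs=5)  # hypothetical data
```

[Because only the small new head is trained, this works with relatively little task-specific data.]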

Nathan: We're seeing a lot of things informed by biology, and you're right, biology is messy. But I think it fits a lot of what we do in business, particularly in the large enterprise, because you're dealing with organisms at that scale. You have to think about really complex processes. You can never just stop and simplify them into a flowchart. It's never that simple.

Roberts: Well, you were talking about the keynotes. One of the talks I thought was particularly fascinating was the one with the paraplegic person who was walking by using their brain to operate a big robot-like prosthetic.

Nathan: Yeah, a neural interface.

Roberts: Yeah. And one of the side effects of using it was that the person got a feedback loop that actually helped improve their ability to move. People are organisms. We're always getting feedback.

Nathan: Exactly.

Check back for part two, where Roberts and Nathan speak about Agile and deep learning.

Check out our eBook on the Rise of Artificial Intelligence for IT Operations.
