(One of my summaries of a talk at the 2017 PyCon.de conference).
He’s a physics professor who started out in particle physics, with the big experiments like those at CERN. Big experiments that also generated big data: think a terabyte per second. This was long before the term “Big Data” was invented.
Lots of data, so you have to filter out the noise and find the actual signal. There’s a fine balance there: if you are too careful, you’ll never discover anything. If you’re too enthusiastic, you can get wrong results. Wrong results are bad for your physics career, so the methods used were quite conservative. He had to fight to get more modern methods like neural networks accepted.
What is intelligence? Two definitions:
The ability to achieve complex goals.
Ability to acquire and apply knowledge and skills.
And artificial intelligence?
All intelligence that is not biological.
Biology, ok, so what is life? “A process that retains its complexity and replicates.” DNA is about 1.2GB. This is the physical life.
Your brain is about 100TB. This is the “software”: cultural life. It accelerates through teaching, books, technology.
Technological life? That will be when it can design its own hardware and software. He guesses robots/computers will be more intelligent than humans in about 50 years (“the singularity”). He also assumes we’ll improve our own bodies. Improved memory and so. Whether it will be a good thing, he doesn’t know. But it will happen.
What makes machines intelligent? There are two main branches in AI: brute-force silicon and intelligent algorithms. The first is Moore’s law in combination with deep neural networks. The second is domain knowledge applied to a field in combination with machine learning. After a while, machines start getting better than humans (for instance Google image recognition).
Another example is the computer that beats Go champions. No one can beat it anymore. And the current version learned only by playing against itself…
Superhuman narrow AI (= “accomplish complex goals”) is already achieved:
Audio translation (text-to-speech).
Mental games (chess, Jeopardy, Go).
Atari video games(!). Just by playing them. (“Reinforcement learning”.)
Elementary particle physics research (event reconstruction)
Retail business management (supply chain, pricing). Pricing is handled by his company ‘Blue Yonder’. AI is better at pricing products than humans!
Decisions. Many important personal and professional decisions are made by gut feeling: those won’t be automated. “Do I want to marry her?”
Operational decisions can be automated. These are decisions that are repeated often: placing orders to replenish stock, setting prices.
How do we currently make such decisions?
Almost always you do nothing. Or you do the same as yesterday or last year. Often, this is bad. There are so many bad decisions being made each day…
Follow business rules. This is already better.
Really thinking about it is rare.
How should you ideally do it? You need data. And a bit of a prediction of the future. Figure out the cost and the utility. Then optimization. Then automation. “Predictive analysis”.
Predictive analysis isn’t always useful. There’s a spectrum: on the one side purely random processes (lottery numbers), on the other side the laws of nature. The middle is where predictive analysis comes in.
Predictive analysis as he does it results in a probability distribution, not in a single predicted number. A probability distribution is much better. But the shop owner needs to know how many to order. You can start with utility and cost functions and optimize based on them.
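The step from probability distribution to order quantity can be sketched as a tiny optimization. This is my own minimal illustration of the idea, not Blue Yonder’s actual method: the demand distribution, the cost numbers and the variable names are all made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical demand forecast: a probability distribution, not a
# single number. Here we fake one with 10,000 samples.
demand_samples = rng.poisson(lam=20, size=10_000)

# Assumed cost structure (made-up numbers): unsold items cost us the
# purchase price, missed sales cost us the lost margin.
cost_per_unsold = 1.0   # euros wasted per item left on the shelf
cost_per_missed = 3.0   # euros of margin lost per item we couldn't sell

def expected_cost(order_quantity):
    unsold = np.maximum(order_quantity - demand_samples, 0)
    missed = np.maximum(demand_samples - order_quantity, 0)
    return (cost_per_unsold * unsold + cost_per_missed * missed).mean()

# Optimize: pick the order quantity with the lowest expected cost.
best = min(range(60), key=expected_cost)
print("order", best, "items")
```

Because a missed sale here costs three times as much as an unsold item, the optimum ends up above the average demand: the distribution plus the cost function tells you how much safety stock is worth it.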
For one supermarket, the out-of-stock rate was 7.5% originally. When they started to follow the AI’s predictions (but with human adjustments), it dropped to 5%. When they decided to do it fully automatic? Below 1%!
But: how nice is it to notice that a computer is better at it than you with your 20 years of supermarket experience???
He showed a couple more nice examples.
He calls it vertical end-to-end AI solutions. Specialized providers combine expertise and experience in solving complicated business problems by narrow AI. More or less an “intelligence layer” on top of existing ERP systems. Make people work smarter.
Back to physics. And to Python. He himself originally had to use Fortran and punch cards. In high energy physics, they at one point moved from Fortran to C++, which cost a lot of wasted man-years. Now many people are moving to Python. With numpy, pandas and dask, it is absolutely good enough. It is readable, it is fast, you are productive. He loves it. Dask will be very important for the future of Python.
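As a toy illustration of the “numpy is good enough” point (my own example, not from the talk): summing a million fake detector hits per event, without a single Python-level loop.

```python
import numpy as np

# Toy data: one million fake "detector hits", each belonging to one
# of 10,000 events (all numbers invented for the illustration).
rng = np.random.default_rng(0)
energies = rng.exponential(scale=1.0, size=1_000_000)
event_ids = rng.integers(0, 10_000, size=1_000_000)

# Vectorized group-by-sum: total energy per event in one call.
per_event = np.bincount(event_ids, weights=energies)

print(per_event.shape)  # one total per event
```

The same thing in a plain Python loop would be dozens of times slower; that, plus the readability, is why "absolutely good enough" is not an exaggeration.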
Now. AI. Do we want it? He thinks it will be for the good.
Photo explanation: some 1:87 scale figures on my model railway (under construction).
My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.