EPFL Applied ML Days summary

True Artificial Intelligence will Change Everything

Jürgen Schmidhuber explained why “True Artificial Intelligence will Change Everything” and why this technology is inevitable.

The current crop of state-of-the-art products (e.g. Google voice search, Google Translate, Alexa’s voice) is built on a technology called Long Short-Term Memory (LSTM). The clever thing about this technology is that you can feed it very raw training data (e.g. for speech you don’t have to align the sound envelope with its phrase) and it learns from that.
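As a rough illustration of the idea, here is a minimal sketch of my own (assuming TensorFlow/Keras, with random arrays standing in for audio features): an LSTM that maps whole raw sequences to labels, with no per-frame alignment supplied by hand. The production systems mentioned above use more sophisticated sequence losses, but the shape of the workflow is similar.

```python
# Toy sketch: random "raw" sequences, one label per whole sequence,
# no hand-made alignment between timesteps and outputs.
import numpy as np
import tensorflow as tf

X = np.random.randn(1000, 50, 13).astype("float32")  # 1000 sequences, 50 steps, 13 features
y = np.random.randint(0, 10, size=(1000,))            # one class label per sequence

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(50, 13)),   # reads the whole raw sequence
    tf.keras.layers.Dense(10, activation="softmax"),  # predicts a class for it
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=2, batch_size=32)
```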

The LSTM software algorithms for this have been around for many (~10) years – we’ve just been waiting for today’s faster hardware.

Scientists think that the brain computes at about 10^20 operations per second. Computers are doing 10^15 operations per second today, and 1 kg of matter can in theory do 10^51. We will be able to build real artificial brains. What will come first is small, animal-brain-like AI. Nature took a long time to get there, and relatively little extra time, percentage-wise, to get to human intelligence – it will probably be similar for machines.
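A quick back-of-envelope on those figures shows why a few decades is a plausible horizon. The doubling period below is my own assumption (Moore’s-law style), not a number from the talk.

```python
# Back-of-envelope using the operations/second figures quoted above.
import math

brain_ops = 1e20       # claimed brain compute, operations per second
machine_ops = 1e15     # claimed current machine compute, operations per second
doubling_years = 1.5   # assumed time for hardware to double in speed

doublings = math.log2(brain_ops / machine_ops)                      # ~16.6 doublings needed
print(f"~{doublings * doubling_years:.0f} years to close the gap")  # ~25 years
```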

The biggest LSTM network (Google Translate) has about 1 billion connections. There are roughly 100,000 billion connections in the human cortex – we will have this in a machine in ~25 years.

Google was a hack

Emmanuel Mogenet, head of Google Research Europe

The first 10 years of Google were a “hack”: information retrieval without understanding the meaning of the question or the answer. Then in 2010 came the Knowledge Graph (factual information about the world) with natural language querying – essentially recognising patterns in questions and converting them to database queries.
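To make concrete what “recognising patterns in questions and converting them to database queries” means, here is a deliberately crude toy of my own – nothing to do with Google’s actual implementation – that matches one question pattern and turns it into a lookup against a tiny hand-made knowledge base.

```python
# Toy pattern-to-query illustration; the knowledge "graph" is a dict.
import re

KNOWLEDGE = {("capital_of", "switzerland"): "Bern",
             ("capital_of", "france"): "Paris"}

def answer(question: str) -> str:
    # Recognise one hard-coded question pattern and convert it to a lookup.
    m = re.match(r"what is the capital of (\w+)\??$", question.lower())
    if m:
        return KNOWLEDGE.get(("capital_of", m.group(1)), "unknown")
    return "no pattern matched"

print(answer("What is the capital of Switzerland?"))  # Bern
```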

Natural language is so hard because it constantly makes implicit references to the world we live in. Humans use it for efficient human-to-human communication – e.g. “will it be dark by the time I get home?” contains no explicit entities: to answer it you need to understand things in the question that are obvious to us but that computers are completely blind to.

However, learning the world by rote doesn’t scale. AI needs to solve “common sense”. If children can learn about the world, why can’t computers? This probably implies building/using robots.

A human life is about 10 billion pictures (10 frames a second over a lifetime). We already have 100 billion images on the internet today, so this should be enough to train “common sense”.
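The 10-billion figure checks out as an order of magnitude. The frame rate, waking hours and years below are my own assumptions, chosen only to show the arithmetic.

```python
# Order-of-magnitude check of "a human life is ~10 billion pictures".
fps = 10                   # assumed "frames per second" of visual experience
waking_hours_per_day = 16  # assumed waking hours
years = 50                 # assumed years of visual experience

frames = fps * 3600 * waking_hours_per_day * 365 * years
print(f"{frames:.2e} frames")  # ~1.05e10, i.e. on the order of 10 billion
```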

The idea would be to learn a “world model” by counting occurrences. E.g. an AI has to learn that cows and fields go together by counting how often they occur together in images.
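A minimal sketch of that counting idea, assuming each image has already been reduced to a set of tags (the tags and data below are invented for illustration):

```python
# Count how often pairs of tags co-occur across a set of tagged images.
from collections import Counter
from itertools import combinations

image_tags = [
    {"cow", "field", "sky"},
    {"cow", "field"},
    {"car", "road", "sky"},
    {"cow", "barn", "field"},
]

pair_counts = Counter()
for tags in image_tags:
    for pair in combinations(sorted(tags), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(3))  # ('cow', 'field') comes out on top
```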

The current state of machine/deep learning is that it needs labelled datasets. We don’t have these labels for the internet’s images, so researchers are trying to build unsupervised machine learning, which avoids the need for labels. A major help is that data in the human world is hierarchical (overlapping hierarchies), so they can start with high-level concepts from human knowledge. Researchers are making lots of progress and think they should be able to solve this soon.

It involves a combination of computer vision + unsupervised hierarchical ML + a common-sense DB (logical scaffolding) + natural language understanding. It’s a loop where what is learned is fed back into the computer vision: natural-language knowledge helps image recognition.

He thinks we are 10 to 15 years away from all this.

Amazon Web Services
Amazon has an easy-to-deploy machine learning framework for AWS called MXNet: https://aws.amazon.com/mxnet/
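As a taster of what that looks like, here is a tiny end-to-end example of my own (synthetic data, not anything AWS-specific) using MXNet’s Gluon API to fit a single dense layer to a linear relationship.

```python
# Minimal MXNet/Gluon sketch: fit y = 2*x0 - 3*x1 + 1 with one Dense layer.
import mxnet as mx
from mxnet import nd, autograd, gluon

X = nd.random.normal(shape=(100, 2))
y = (2 * X[:, 0] - 3 * X[:, 1] + 1).reshape((-1, 1))

net = gluon.nn.Dense(1)
net.initialize(mx.init.Normal(sigma=0.1))
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
loss_fn = gluon.loss.L2Loss()

for epoch in range(100):
    with autograd.record():
        loss = loss_fn(net(X), y)   # per-sample squared error
    loss.backward()
    trainer.step(batch_size=X.shape[0])

print(net.weight.data(), net.bias.data())  # should approach [2, -3] and [1]
```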

Sensors used to be the bottleneck.
Key tool: compression

Panel on ML and society
Ed Bugnion: Today’s AI can do anything that a human can do in 1/10th of a second.
E.g. react to a situation while driving a car, or analyse a radiology image to recognise a broken bone. Data science requires data, and data is concentrated.

Emmanuel Mogenet (Google): There’s a lot of public data – today it’s more a question of infrastructure. Google plans to make its datasets available if you use its cloud.

Nuria Oliver, Vodafone: The view in the open movement is that it’s better to bring the algorithms to the data.

AI replacing jobs?

Ed: It’s a generational transition problem – just like with the fall of the Iron Curtain, older people found it much harder to adapt than the younger generation.

Em: ML will be an exoskeleton for the brain. Will it empower people rather than render them obsolete?

The gap between ML experts and the general population is becoming big, which makes it very difficult for society (government) to make decisions. Education is important. We need more informed conversation, not sensational (apocalyptic) articles.

The Swiss government recognises there’s massive change coming – e.g. the Digital Switzerland initiative. EPFL has computational thinking at the core of its curriculum.
We need to educate people to understand what’s possible – not how to do things.

A quick show of hands from the audience showed ¾ positive on AI & society.

Algorithms and bias.
Nu: It’s a real problem – with complex data it’s very difficult to understand the bias in it. The corollary: humans are full of bias, are selfish and make biased decisions.
Transparency and accountability of algorithms are also a problem.

Ed: Computers analyse us as unique, not as equal. Society isn’t ready to deal with this.
Em: Not worried about explainability – researchers will figure this out. The issue will be the legal framework (e.g. contesting an AI-based decision). Humanities fields need to understand AI tech – Nu: Homo Deus is an example.

Em: Change isn’t worrying – it’s the rate of change.
Machines don’t have creativity, intent or purpose.
Ed: Learning how to interact with other human beings takes a lifetime.
The jobs we have today are an artefact of the limitations of our machines.
Nu: What does it mean to be human? This will change over time.

Em: In 5 years he wants to allow anyone with a spreadsheet to press a button and get a predictive model.

Generic AI
There were many presentations about implementing & hacking what I’d call “current generic AI software running on generic GPU-based hardware”.

The recipe is:
1) get a dataset (e.g. a set of categorised images or texts) to train the AI
2) try training AI frameworks on this dataset
3) use the resulting AI to recognise patterns etc.

The training step involves a lot of fiddling with parameters etc., BUT the next generation of AI should be able to train itself – I’m assuming this means it will be able to figure out which are the best algorithms and parameters, so it will replace all this fiddling. Amazon is some way towards making this easier.
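For the concrete flavour of the recipe above, here is a minimal sketch of my own, using scikit-learn and its bundled digits dataset as a stand-in for “a set of categorised images”; the specific model and parameters are just illustrative choices (the fiddling mentioned above).

```python
# The three-step recipe on a toy labelled image dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1) get a labelled dataset
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# 2) train a model (this is where the parameter fiddling happens)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# 3) use the resulting model to recognise new patterns
print("accuracy:", model.score(X_test, y_test))
```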

What’s missing is a real discussion about data in the real world – e.g. what’s needed to take research and build real-world products.

Thanks to Marcel Salathe and team for organising a great 2-day conference.

Update: videos and slides of most presentations of the Applied Machine Learning Days (AMLD) are now online
