Dr Hermann Hauser delivers machine learning lecture

26 October 2017

Are machines better than humans? Using the metrics discussed by influential scientist, entrepreneur, and investor Dr Hermann Hauser at SCI’s Public Evening Lecture, it’s hard to say no.

Where we have just two eyes, computer systems can be linked to hundreds of thousands of webcams seeing far beyond our visible spectrum; they’re already better than us at recognising people. Capturing and reacting to data from multiple viewpoints will allow driverless cars – the most significant near-term implementation of artificial intelligence in the real world – to make road travel safer and more efficient than human drivers could ever be. And our ears can decipher only a fraction of the frequencies picked up by beam-forming microphones, which can locate and focus on the sounds they intend to interpret.

Hardest to supersede, of course, is the human brain. Dr Hauser, the co-founder of the ground-breaking computing company Acorn and later of Cambridge-based technology investors Amadeus Capital Partners, described how today’s transistors are 1,000 times smaller than their biological counterpart, the neuron – 20 nanometres compared with 20 microns – and how computers can operate a million times faster than the brain, running at gigahertz frequencies against the brain’s kilohertz.

There remain areas where the human brain comes out on top, though. ‘Humans still have the edge in terms of the number of neurons in the brain, which is about 100 billion. The largest chips we can produce have around 10 billion transistors,’ Dr Hauser explained. Likewise, the processing power of the brain per unit of energy far outperforms even the most powerful computer by a factor of 1,000.
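To see how these figures fit together, here is a quick back-of-envelope comparison in Python. The numbers are simply those quoted above, arranged by this article rather than taken from the lecture itself.

    # Back-of-envelope comparison of the figures quoted by Dr Hauser.
    neurons = 100e9            # ~100 billion neurons in the human brain
    transistors = 10e9         # ~10 billion transistors on the largest chips

    neuron_size = 20e-6        # neurons are roughly 20 microns across
    transistor_size = 20e-9    # transistors are roughly 20 nanometres across

    brain_rate = 1e3           # the brain operates at kilohertz rates
    computer_rate = 1e9        # computers operate at gigahertz rates

    print(f"Computing elements: brain ahead by {neurons / transistors:.0f}x")
    print(f"Feature size: transistors smaller by {neuron_size / transistor_size:.0f}x")
    print(f"Switching speed: computers faster by {computer_rate / brain_rate:.0e}x")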

‘The real secret of the brain, which we haven’t really understood yet, is that every neuron on average has 1,000–10,000 connections. And it is the usefulness of the connection of every neuron to 1,000–10,000 others that we’re now trying to understand with new machine learning.’

Dr Hauser explained that the greatest advantage of the brain is in this massive parallelism, where not only the computing elements are distributed throughout the brain, but memory too, and connectivity is fast – which is why we know immediately when we’ve stubbed a toe.

The third wave

Replicating this parallelism requires a new microprocessor architecture – only the third such shift in computing history. The first was the move from CISC to RISC in the 1980s, which led Dr Hauser’s Acorn Computers to produce the ARM processor in 1985 – an architecture found in 95% of all smartphones today. The second was the graphics processing unit, developed to render the images and video displayed on PC screens.

ARM microprocessor. Credit: Uwe Hermann
Microprocessors must be linked in an architecture akin to the human brain.

‘Now we're a bit stuck with single processors, because Moore's Law has finished,’ Dr Hauser made plain. ‘We cannot make processors go faster than about 3 gigahertz because there's this little problem called the speed of light that we haven't solved yet. So, the only way we can improve the performance of computers nowadays is by parallelising it and adding lots and lots of processors together, like the brain.’

‘As it happens, machine learning works on very large datasets, so there is a natural way of dividing that dataset into lots of smaller bits of data and then dedicating processors to each of the smaller parts.’ Dr Hauser described a machine learning processor under development with 7,000 processors linked to one another, each bearing 650 megabytes of on-chip RAM. ‘The reason why this is important – lots of processors and lots of RAM – is because if you can do the computation on the chip itself, it’s 10–100 times faster than if you have to go off-chip to find the data and bring it on-chip again.’
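The dataset-splitting idea can be sketched in a few lines of Python. The chunking function and toy workload below are illustrative stand-ins of this article’s own, not a description of the chip Dr Hauser mentioned.

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for the work one processor does on its slice of the data.
        return sum(x * x for x in chunk)

    def split(data, n_workers):
        # Divide a large dataset into roughly equal chunks, one per processor.
        size = -(-len(data) // n_workers)  # ceiling division
        return [data[i:i + size] for i in range(0, len(data), size)]

    if __name__ == "__main__":
        dataset = list(range(1_000_000))      # the 'big data'
        chunks = split(dataset, n_workers=8)  # one chunk per processor
        with Pool(8) as pool:                 # workers run in parallel
            partials = pool.map(process_chunk, chunks)
        print(sum(partials))                  # combine the partial results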

Abandoning determinism

Dr Hauser argued that today, the way computers perceive the world through binary judgements has ‘seduced us into thinking we can describe the world quite well by making statements that are either true or false.’ Instead, we should abandon determinism in favour of probability – instead of machines working only to rules set by programmers, they must use big data to learn and make their own rules based on probability.

‘The nice thing about things being true or false is that you can do these "if-then" statements. So, you could say "If all humans are mortal, Socrates is human, therefore the poor chap's going to die". This is very seductive – it makes a lot of sense, but it turns out it is better to describe it with probability.’ Probability-based understanding creates a far more complete and nuanced perception, Dr Hauser said; even the most evident ‘truth’ of our mortality is subject to the minute possibility that we may extend human lives indefinitely at some stage.
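To make the contrast concrete, here is a small Python sketch of the Socrates example; the probabilities are invented purely for illustration.

    # Deterministic version: a hard if-then rule, true or false only.
    def is_mortal(is_human):
        if is_human:           # 'all humans are mortal'
            return True
        return False

    # Probabilistic version: the same reasoning with room for uncertainty.
    # The probabilities below are invented for illustration only.
    def p_mortal(p_human):
        p_mortal_given_human = 0.9999      # near-certain, but not absolute
        p_mortal_given_not_human = 0.5     # simply unknown for non-humans
        return (p_human * p_mortal_given_human
                + (1 - p_human) * p_mortal_given_not_human)

    print(is_mortal(True))   # True - no room for nuance
    print(p_mortal(1.0))     # 0.9999 - leaves open the minute possibility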

The genie problem. Credit: JD Hancock
The genie problem: we must be careful what we wish for when we set the goals for AI.

‘What we gain [from machine learning] is that we don't have to programme any more. Most of the value of these machine learning environments comes from teaching – providing the right training datasets to the computer, and the computer will then figure out for itself from the training dataset what the sensible thing to do is. But it does need a lot of data. It needs big data.’
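As a minimal illustration of teaching rather than programming, the sketch below uses scikit-learn (a library choice of this article; the lecture named no tools) to learn a rule from labelled examples instead of writing it by hand. The data is invented.

    from sklearn.tree import DecisionTreeClassifier

    # Invented training data: [daylight hours, temperature in C] -> season.
    X_train = [[16, 25], [15, 22], [8, 2], [7, -1], [17, 28], [9, 4]]
    y_train = ["summer", "summer", "winter", "winter", "summer", "winter"]

    # No hand-written if-then rules: the model infers them from the dataset.
    model = DecisionTreeClassifier().fit(X_train, y_train)

    print(model.predict([[14, 20]]))  # ['summer'] - learned, not programmed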

The difficult questions, Dr Hauser said, are what a computer should do, and why. ‘There is a big conflict, because of course we'd like these artificially intelligent machines to do something that fits in with our lives – we want to give them human goals. But it is not clear how we do that. This might well be one of the biggest problems over the next few decades – I call it the genie problem, because we're actually very bad at wishing for the right thing.’

SCI hosts a range of free-to-attend public evening lectures in London throughout the year, with experts in their fields delivering insightful talks on topics relating to science and industry.

By Simon Frost
