In Search of the Logic of Artificial Intelligence
Prof. Catholijn Jonker, Professor of Interactive Intelligence in the faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) and board member of the Delft Design for Values Institute, is in search of the logic of artificial intelligence. A short interview.
“Research into machine learning in the sense of artificial neural networks goes as far back as the 1960s. The computers of those times lacked the processing power to really get anywhere. It was not until years later that the Belgian company Lernout & Hauspie came up with their speech recognition technology – and we thought we had natural language down to a fine art. The technology was excellent, but it still did not work as well as expected in practice. Now that risk is back. For the past two or three years, there’s been this feeling that neural networks are the solution to everything; but things aren’t so simple in practice. The annoying thing about artificial intelligence is that it always gets ahead of itself. The ideas are bold, future-looking and innovative. But it takes extremely powerful computers to make them work. For a long time, that was the bottleneck. Not surprisingly, today’s great breakthroughs were set in motion by large companies with huge processing power, such as Google, IBM and Apple.
Universities lack the resources for such research. Intelligent Systems, my own department within EEMCS, deals with everything in this area of machine learning. We can advise on what works, and what doesn’t. What’s real, and what’s hype? Reinforcement learning systems are fantastic as long as you are working in a limited context. Generalising these results is still going to take loads of research, and we are certainly doing our bit.
Take driverless cars, for instance. On the one hand, the world is hugely dynamic and complex: almost anything can happen. On the other, a car is relatively simple. You can brake, accelerate and steer away from obstacles – that’s not so complex. But interpreting your surroundings, determining what is heading towards you: that’s the crux of the matter. Not only must driverless cars know the rules of the road, they must also apply them safely. The rules of the road are a finite set of rules, but the variety of situations that cars can encounter is endless. People have to get their driving licence before they are allowed to drive. Why should that be any different for a computer system? I haven’t heard of a single one of them getting its driving licence, but they drive around nevertheless.
The challenges for the time to come lie in combining machine learning with the ability to reason, in what is known as ‘knowledge engineering’. These are reasoning-based systems with a logic orientation. If we can make that connection, we can also ask the system why it took a certain decision. The system will then put forward bits of knowledge rather than saying ‘that was just the best pattern I had learnt’. The aim is to frame the knowledge within reinforcement learning systems in explicit terms, which is the domain of knowledge engineering. The next breakthrough in artificial intelligence will come when we succeed in combining reinforcement learning with logical systems.”
This article originally appeared in the October 2018 issue of Delft Outlook.
Catholijn Jonker is professor of Interactive Intelligence at Delft University of Technology. All her research lines adopt a value-sensitive approach to developing intelligent agents that can interact with their users in value-conflicting situations, even when meta-values no longer resolve the conflict. In her vision, agents and robots should explain their behaviour and decisions in a context-sensitive and engaging way.