
Dr. Inga Strümke

Researcher at the Norwegian Open AI Lab

  • Facebook
  • LinkedIn
  • Instagram
  • Twitter

About Inga Strümke

Inga researched the use of machine learning in particle physics during her PhD, and has been active in the field ever since. Beneath her favourite question, “How does the universe work?”, lies a layer of “How can we understand the world we live in?”, which speaks right to artificial intelligence (AI).

She argues that using AI for good is closely linked to both understanding AI well and knowing ourselves and our needs. Not only can AI help us drive cars and detect cancer – it can help us solve the optimisation problem of human ethics. If we play it right.

Inga is a researcher at the Norwegian Open AI Lab at NTNU, and a part-time researcher at the Department of Holistic Systems at Simula Research Laboratory. She has previously worked on trustworthy AI and algorithm auditing at PwC, received the University of Bergen’s outreach prize in 2019, and was named one of Norway’s 50 leading women in tech in 2020. She is an international keynote speaker and podcast host at the Norwegian Council for Digital Ethics, and has several video courses on artificial intelligence available, also in Spanish.

1. What is your full name and title? 

Dr. Inga Strümke

2. What is your occupation? 

Researcher at the Norwegian Open AI Lab

3. Tell us about your first encounter with artificial intelligence (AI)? 

I wrote a poker player in LISP (one of the first 'AI programming languages') for a course at my university while I was actually studying theoretical physics.

4. What is your competence within the field of AI? 

I was once good at GOFAI (good old-fashioned AI), but have for the last decade focused more on data analysis, i.e. machine learning, and on methods for explaining what machine learning models learn – that is, explainable AI, or XAI for short.

5. Why did you develop an interest in AI? 

Because to me, the most interesting thing about the universe is that it can be understood, meaning modeled and described. Thus far, we know of one species intelligent enough to model its workings sufficiently to solve problems, namely humans. AI is about to become the second. What now?

6. Can you recommend a relevant book or film about AI?

Stay away from movies if you want to learn about AI :-) If you really want to learn about it, take a boring, old textbook with more facts than speculation, for instance http://aima.cs.berkeley.edu

7. Why should we, or should we not, be afraid of AI?

We should not be afraid of tools that we build ourselves - we should take responsibility. We should be afraid because we humans are very easy to manipulate, bad at expressing our goals unambiguously, and particularly bad at handling long-term consequences.

8. Which field, in your opinion, has the most to benefit from AI – and why?

Health care and automation. The former because medicine is a mature field with experience in handling difficult ethical questions and tradeoffs, of which current AI development is full to the brim. The latter because many tasks in automation are too dangerous or difficult for humans – in my opinion, people shouldn't even drive cars – giving AI relatively easy competition.


9. How should the use of AI develop in the future? 

Sustainably and responsibly. However, casting a mere glance at carbon emissions from leisure, meat consumption, climate change, the market for weapons of mass destruction, and so forth, this might be overly optimistic.


10. Why should participants tune in to your presentation during AI+?

Because I promise to give everybody in the audience at least one new idea - and good ideas are hard to come by. 

Topic

Machine learning famously allows computers to find insights in data without being explicitly programmed. Machine learning techniques can even allow computers to find insights hidden from humans, resulting in superhuman performance. But how can we know which patterns such systems rely on, whether their assumptions are reasonable, and whether their performance is robust? Is it even possible to understand the internal knowledge of machine learning systems – to peek into the black box? Come and find out!
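As a small, hypothetical illustration of the kind of question XAI asks (not part of the talk itself), one of the simplest ways to "peek into the black box" is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The toy model and all names below are assumptions for the sketch, not anything from the presentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": a model whose prediction depends strongly on
# feature 0 and not at all on feature 1.
def black_box(X):
    return 3.0 * X[:, 0] + 0.0 * X[:, 1]

X = rng.normal(size=(500, 2))
y = black_box(X)

def permutation_importance(model, X, y):
    """Rise in mean squared error when each feature is shuffled.

    A large rise means the model relies on that feature; a rise of
    zero means the feature is ignored.
    """
    base_error = np.mean((model(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-target link
        perm_error = np.mean((model(X_perm) - y) ** 2)
        importances.append(perm_error - base_error)
    return importances

imp = permutation_importance(black_box, X, y)
print(imp)  # feature 0 gets a large importance, feature 1 gets zero
```

The appeal of this approach is that it treats the model purely as a black box – it needs only predictions, not internal weights – which is exactly the setting the talk's questions are about.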