How do we solve new problems?
Motivation
Humans are extraordinarily good at adapting to new challenges. On a short walk, you might navigate a route you’ve never taken, avoid obstacles, and flexibly update your plan to include a stop at the grocery store — all without appreciating how rare this skill is. Yet understanding how we adapt to new challenges, and why we sometimes struggle, has implications for how we educate, how we design, and how we develop artificial intelligence. Perhaps most importantly, it can give us insight into how our own minds work.
You may appreciate this skill more when you push it to its limits — when you try something outside your comfort zone. Fluid intelligence tests are designed to do just that. They present novel, complicated brain-teaser problems. You have to learn on the spot how the problems work, because they are different from anything you might do on a normal day. Incredibly, these abstract problems can predict a lot about real life, such as your grades at school or your health later in life.
We think this is because many different mental challenges draw on the same brain network, called the ‘multiple-demand’ network. Neurons in this network adapt to encode whatever is relevant for you at a given moment. The network then connects to brain regions that are important for vision, hearing, and action, to drive our focus towards what is relevant. In other words, it supports attention. But how do we use attention to solve new problems?
One possibility is that we use attention to form moments of focus, or “attentional episodes”, around simple parts of a problem. Because we choose only a few things to focus on at a time, we can understand how they relate to each other, and gradually build up a solution to the whole problem. Researchers have tested this with classic fluid intelligence problems: they showed people each problem either as a single image, or split into three separate images to mimic a viewer attending to each sub-part in turn. Here’s an example — try to complete the pattern in problem A:
Now, try problem B:
You can see how much easier it becomes once the problem is broken into parts.
If this were the whole story, we could stop now. Unfortunately, knowing that you need to solve a problem piece by piece does not give you the solution. The instructions for traditional fluid intelligence problems even point out that each problem has multiple parts — but people still find the problems hard to solve. They still need to figure out what the parts are and focus on each in turn.
We want to understand why knowing that complex problems are built from simple parts does not help us solve them. Do we struggle to guess which things we should treat as a ‘part’? Or do we struggle to focus on each part in turn? With this study, we hope to understand where extra support is useful when we are doing something mentally challenging — as well as inspire you to appreciate your adaptable brains.
Methods
We plan to differentiate two aspects of novel problem-solving: generating hypotheses (‘guessing’) and focusing attention (‘attending’). We will do this with stimuli modelled on fluid intelligence problems, like A and B above. We will present each stimulus in one of three ways:
- Integrated (like problem A)
- Segregated (like problem B)
- Interactive
We expect the integrated condition to be difficult and the segregated condition to be easy, as researchers have demonstrated before. The interactive condition is our key manipulation. Participants will be able to click on lines of the integrated problems to make them darker or lighter. So, they can alter the problems to look more like the segregated stimuli — if they can guess which lines should be highlighted together.
In this way, the interactive condition requires people to guess which lines form a part, but makes it easy to attend to those lines to extract a solution. If this condition is easy, like the segregated condition, we know that attending, not guessing, is the challenge in problem-solving. If it is hard, like the integrated condition, we know that guessing is the challenge. If it is in-between — reliably different to both the integrated and the segregated conditions — then we know that we cannot simplify the challenge of problem-solving down to one thing.
We will also collect fluid intelligence scores to make sure that our integrated problems are tapping into fluid ability.
Sample
From previous work, we estimate that people will solve ~25% more problems when they are segregated, compared to integrated. So, a difference of 10% between either of these conditions and the interactive condition could be meaningful. The effect size for a within-subjects mean difference of 10, with a hypothetical standard deviation of 30, is d=.24. Based on our power analysis (d=.24, power=80%, alpha=.05/3 for three t-tests), we plan to recruit 185 participants.
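As a rough cross-check, the sketch below reproduces this calculation with statsmodels, assuming a paired, two-sided t-test. (The stated d=.24 is consistent with treating the standard deviation of 30 as a per-condition spread and conservatively assuming no correlation between conditions, so that the standard deviation of the paired differences is roughly 42.)

```python
# A minimal sketch of the power analysis described above (not the final code).
from math import ceil

from statsmodels.stats.power import TTestPower

effect_size = 0.24   # stated effect size for the within-subjects contrast
alpha = 0.05 / 3     # corrected for three planned t-tests
power = 0.80

# TTestPower covers the paired / one-sample t-test case.
n = TTestPower().solve_power(
    effect_size=effect_size,
    alpha=alpha,
    power=power,
    alternative="two-sided",
)
print(ceil(n))       # approximately 185 participants
```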
Cost
Participant payment costs will be budgeted at 45 minutes each, at £8.40 per hour including fees. We will pay participants for partial attempts in increments of 15 minutes, always rounding up. Based on our experience with Prolific, we expect a drop-out rate of approximately 20%, with most exiting in the first 15 minutes. We are requesting £1291.50 for 185 complete datasets (£6.30pp) and 40 incomplete datasets (£3.15pp).
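For transparency, a quick check of the arithmetic behind the requested total, using the per-person figures above:

```python
# Quick check of the requested budget, using the per-person amounts stated above.
complete = 185 * 6.30   # full 45-minute sessions at £8.40/hour, including fees
partial = 40 * 3.15     # budgeted allowance per incomplete dataset

print(f"£{complete + partial:.2f}")   # £1291.50
```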
Analyses
Our core analysis will compare the proportion correct in each condition using three planned contrasts: integrated vs. segregated, integrated vs. interactive, and segregated vs. interactive. We will consider an answer incorrect if any element is wrongly drawn. We will extract each participant’s average score for each condition, then contrast each pair of conditions with a two-sided paired t-test.
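A minimal sketch of this analysis in Python is below. The file name, column names, and data layout are hypothetical placeholders; the actual analysis code will be shared on GitHub as described under Open Data.

```python
# A minimal sketch of the planned contrasts (hypothetical file and column names).
# Assumes a long-format table with one row per participant and trial:
#   participant, condition ('integrated' / 'segregated' / 'interactive'),
#   correct (1 = every element drawn correctly, 0 = otherwise).
from itertools import combinations

import pandas as pd
from scipy.stats import ttest_rel

data = pd.read_csv("responses.csv")

# Average proportion correct per participant and condition.
scores = (
    data.groupby(["participant", "condition"])["correct"]
        .mean()
        .unstack("condition")
)

# Three planned pairwise contrasts: two-sided paired t-tests, each judged
# against a Bonferroni-corrected alpha of .05 / 3.
for cond_a, cond_b in combinations(["integrated", "segregated", "interactive"], 2):
    result = ttest_rel(scores[cond_a], scores[cond_b])
    print(f"{cond_a} vs {cond_b}: t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```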
To validate our matrix problems and replicate previous work, we will also correlate performance on our task with fluid intelligence scores.
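And a matching sketch for the validation analysis, again with hypothetical file and column names (here using accuracy in the integrated condition, in line with the validation aim described in the Methods):

```python
# Sketch of the validation analysis: correlate per-participant accuracy on the
# integrated problems with fluid intelligence scores (hypothetical file names).
import pandas as pd
from scipy.stats import pearsonr

scores = pd.read_csv("condition_scores.csv", index_col="participant")   # columns: integrated, segregated, interactive
fluid = pd.read_csv("fluid_intelligence.csv", index_col="participant")  # column: fluid_score

merged = scores.join(fluid, how="inner")
r, p = pearsonr(merged["integrated"], merged["fluid_score"])
print(f"r = {r:.2f}, p = {p:.4f}")
```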
Open Data
Stimuli, stimulus presentation code (JavaScript), and analysis code (Python) will be available through GitHub. Data will be hosted by the MRC Cognition and Brain Sciences Unit and will be freely available on request.
Pre-registration DOI: 10.17605/OSF.IO/T7PHZ