class: left

<link rel="preload" href="static/anim1.mp4" as="video" type="video/mp4">
<link rel="preload" href="static/anim2.mp4" as="video" type="video/mp4">

.title[
## Preexisting Spatial Biases Influence the Encoding of Information in Visual Working Memory
]

.subtitle[.single-line[
.pre[.bold[Colin Quirk] | Kirsten Adam | Edward Vogel]
Virtual Working Memory Symposium
June 4th, 2020
]]

.logo[
<img src="static/uchicago.jpg" width=300px/>
]

.footer[.pre[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk | .sm-logo[<img src="static/github.png" width=20px/>] @ColinQuirk | colinquirk.com
]]

---

.medium[.space[
Attentional control is known to drive much of the variance in visual working memory capacity (Adam, Mance, Fukuda, & Vogel, 2015; Fukuda & Vogel, 2011; Rouder et al., 2008).
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Individual differences in capacity are often implicitly thought of in terms of how much one can maintain rather than how much one can encode, despite evidence that much of the variance in individual performance can be explained by differences in attentional control.

--

.medium[.space[
Strategy use is known to have a large impact on individual differences in span tasks (Dunlosky & Kane, 2007; Turley-Ames & Whitfield, 2003).
]]

???

We already know that strategy use can have a large impact on performance on span tasks.

--

.medium[.space[
Comparatively less work has been done to understand VWM encoding strategies (Bengson & Luck, 2015; Brady & Tenenbaum, 2013; Pearson & Keogh, 2019).
]]

???

Despite these two facts, comparatively less work has examined strategic differences in visual working memory.

---

.section[
How do people choose what items to prioritize when they are overloaded?
]

.center[.cd-display[
<img src="static/cd_display.png" width=300px/>
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

The question I want to examine today is a simple one: how do people choose what items to prioritize when they are overloaded? Many attention and working memory experiments present situations where all items are equally important and bottom-up effects have been mostly controlled. All the data I'm going to show you today come from tasks that use displays like this.

---

.mycenter[
<img src="static/cd_task.png" width=1000px/>
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Here I am showing a vanilla change detection task. Subjects are asked to remember as many of the colors as possible and are, after a short delay, tested on a random item. Because of the limited nature of visual working memory, you cannot remember all of these items accurately, meaning you need to prioritize a subset of them. You can imagine that a number of factors could drive selection in this situation, but often it is assumed that a simple random subset is selected.

---

.mycenter[
<img src="static/cd_task_split.png" width=1000px/>
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Here I'm going to show that this assumption is flawed by examining the influence of target position. By splitting the display into quadrants, we can explore whether performance varies across target locations. The logic is that differences across target locations can reveal which items were stored during encoding.
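For anyone who wants to try this split on their own change detection data, here is a minimal sketch of the quadrant analysis. It assumes a trial-level table with hypothetical columns `subject`, `target_x`, `target_y` (the tested item's position relative to fixation), and `correct`; the file and column names are placeholders rather than our actual analysis code.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial with the tested item's
# position relative to fixation (0, 0) and whether the response was correct.
trials = pd.read_csv("change_detection_trials.csv")  # placeholder file name

def quadrant(x, y):
    """Bin a target position into one of the four display quadrants."""
    vertical = "upper" if y > 0 else "lower"
    horizontal = "left" if x < 0 else "right"
    return f"{vertical}_{horizontal}"

trials["quadrant"] = [
    quadrant(x, y) for x, y in zip(trials["target_x"], trials["target_y"])
]

# Raw accuracy for each subject in each quadrant; differences across these
# cells are what the upcoming plots summarize (after mean-centering).
accuracy = trials.groupby(["subject", "quadrant"])["correct"].mean().unstack("quadrant")
print(accuracy.head())
```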
---

.mycenter[
<img src="static/example.png" style="width: 1000px">
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Before I get into the data, I'd like to help you build an intuition for the plots I'll be showing throughout this talk. Here I'm showing accuracy on the y-axis, split by the possible target quadrants. However, I've mean-centered accuracy to make comparisons across subjects easier. Because the target location is chosen randomly, a random selection model would predict no differences across quadrants.

Here I'm showing a subject with 5% higher accuracy when the target appears in the upper left, relative to their own mean performance. This plot could represent a subject with 60% mean accuracy and 65% accuracy in the upper left quadrant, or a subject with 90% mean accuracy and 95% accuracy in the upper left. By mean-centering accuracy, we can focus on differences across the quadrants without getting distracted by differences in mean performance.

---

.mycenter[
<video width="1000" autoplay muted="muted" style="margin:auto">
<source src="static/anim1.mp4" type="video/mp4">
</video>
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

It's important to note that, because the data are mean-centered, the other points must necessarily balance out. With that, I want to go ahead and show the basic effect we are going to be exploring today.

---

.mycenter[
<img src="static/basic_effect.png" style="width: 1000px">
]

.footer[.pre[
n = 133 | Data from Xu\*, Adam\*, Fang, & Vogel, 2018
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

We find a large amount of variance across the quadrants, at some points reaching effects near 20%. If any of you use change detection or a similar task, you can probably look at your data and find this same effect. This is really interesting to me because I think it's easy, as working memory researchers, to ignore the wide range of strategies individuals use when completing our tasks. I think this result is just a small look into that range of differences.

---

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

.mycenter[.large[
Is this a reliable result?
]]

---

.mycenter[
<img src="static/30days.png" style="width: 500px">
]

.footer[.pre[
n = 74 | Data from Xu\*, Adam\*, Fang, & Vogel, 2018
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Next I'll show you data from a study in which 74 participants completed 120 trials of change detection every day for 30 days, for a total of over 3000 trials per subject. Thanks to the large number of trials, it will be easier to distinguish genuine signal from noise.

First I'm going to show the variability we would expect in this dataset if subjects were using a random selection strategy. We can simulate this by randomizing the quadrant information.

---

.mycenter[
<img src="static/shuffled_exp2.png" style="width: 1000px">
]

.footer[.pre[
n = 74 | Data from Xu\*, Adam\*, Fang, & Vogel, 2018
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

As you can see, under the random selection hypothesis, no participant shows an effect even as high as 5%. Compare this to the variability we actually observe.
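To make that simulation concrete, here is a rough sketch of how the mean-centered quadrant effect and the shuffled baseline could be computed, reusing the hypothetical `trials` table (columns `subject`, `quadrant`, `correct`) from the earlier sketch. This is illustrative, not our actual analysis pipeline.

```python
import numpy as np

def quadrant_bias(df):
    """Accuracy per subject and quadrant, centered on that subject's overall accuracy."""
    overall = df.groupby("subject")["correct"].mean()
    by_quadrant = df.groupby(["subject", "quadrant"])["correct"].mean().unstack("quadrant")
    return by_quadrant.sub(overall, axis=0)

observed = quadrant_bias(trials)

# Random-selection baseline: shuffle the quadrant labels within each subject so
# that any remaining quadrant effect can only reflect trial-count noise.
rng = np.random.default_rng(seed=1)
shuffled = trials.copy()
shuffled["quadrant"] = shuffled.groupby("subject")["quadrant"].transform(
    lambda q: rng.permutation(q.to_numpy())
)
null = quadrant_bias(shuffled)

# Compare the spread of the observed quadrant effects with the shuffled baseline.
print(observed.abs().max(axis=1).describe())
print(null.abs().max(axis=1).describe())
```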
---

.mycenter[
<video width="1000" autoplay muted="muted" style="margin:auto">
<source src="static/anim2.mp4" type="video/mp4">
</video>
]

.footer[.pre[
n = 74 | Data from Xu\*, Adam\*, Fang, & Vogel, 2018
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

The real data reveal a much larger amount of variability across participants. Still, because we have a large number of trials performed across 30 days, we can check whether this variability reflects a reliable trait by performing a split-half reliability test.

---

.mycenter[
<img src="static/corr_by_day_blank.png" style="width: 600px">
]

.footer[.pre[
n = 74 | Data from Xu\*, Adam\*, Fang, & Vogel, 2018
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

In a second, I'm going to show you the accuracy for even days plotted against odd days. So this isn't just a split-half reliability test; it is also a test of whether a participant's bias is stable over multiple days. If we are observing random noise, we should see no relationship across the days. However, if we are observing a stable trait, we expect most of the points to fall near the identity line, showing a strong correlation.

---

.mycenter[
<img src="static/corr_by_day.png" style="width: 600px">
]

.footer[.pre[
n = 74 | Data from Xu\*, Adam\*, Fang, & Vogel, 2018
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

As you can see, there is a strong relationship between performance on even and odd days across the entire range of accuracy values. This suggests that the bias we observe is a reliable trait that is stable over many days.
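A sketch of this even/odd split, assuming the hypothetical `trials` table also carries a `day` column (1 through 30) and reusing the `quadrant_bias` helper from the earlier sketch; again, this is illustrative rather than the actual analysis.

```python
import numpy as np

# Quadrant bias (as defined in the earlier sketch) computed separately on odd
# and even days for each subject.
odd_bias = quadrant_bias(trials[trials["day"] % 2 == 1])
even_bias = quadrant_bias(trials[trials["day"] % 2 == 0])

# One point per subject-by-quadrant cell: bias on odd days against bias on even
# days. A strong positive correlation indicates a stable, trait-like bias.
pairs = (
    odd_bias.stack().rename("odd").to_frame()
    .join(even_bias.stack().rename("even"))
    .dropna()
)
r = np.corrcoef(pairs["odd"], pairs["even"])[0, 1]
print(f"even/odd split-half correlation: r = {r:.2f}")
```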
Before we conclude that all of the variability we observed is due to an encoding bias, there is one additional potential source of variability that I'd like to rule out.

---

.mycenter[
<img src="static/same_display_task_diff.png" style="width: 700px">
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Normally, when we run a change detection task, we generate random displays for each trial, leading to differences across subjects.

---

.mycenter[
<img src="static/same_display_task_same.png" style="width: 700px">
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Here, I'm going to show what happens when all subjects see the exact same sequence of random displays.

---

.mycenter[
<img src="static/same_display_blank.png" style="width: 1000px">
]

.footer[
n = 283
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

We already know the differences across quadrants are highly reliable. If they persist when every subject sees identical displays, the variability cannot be explained by display differences and must instead reflect individual differences.

---

.mycenter[
<img src="static/same_display.png" style="width: 1000px">
]

.footer[
n = 283
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

As you can see, using identical displays doesn't seem to have a large impact on the amount of variability observed. In fact, in this large sample we now see a nice distribution all the way out to the extreme values. These results further suggest that this bias is a trait of the participant performing the task.

---

.mycenter[
<img src="static/same_display_with_mean.png" style="width: 1000px">
]

.footer[
n = 283
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Interestingly, we do see an apparent main effect of quadrant. This main effect shouldn't be interpreted as a global effect in which everyone performs better in the upper left; clearly that is not the case. Instead, it should be interpreted as reflecting the proportion of people who do well in a given quadrant. Here it seems we sampled more subjects who do better in the upper left, and there may well be more people overall who favor that quadrant. While you could come up with explanations, such as reading habits, for the preference for a particular quadrant, I'm personally more interested in understanding what is happening at the individual level.

---

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

.mycenter[.large[
When does this bias appear?
]]

???

Next, I'm going to ask when this bias appears and whether we can manipulate it.

---

.mycenter[
<img src="static/tl_display.png" width=300px/>
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

I'm going to show data from a visual search task in which subjects are required to find a target T and report its orientation. Instead of accuracy, we will be analyzing reaction time. So this is a completely different task, but if the process of encoding information is similar, we would expect to see a similar range of variability and perhaps a similar main effect.

---

.mycenter[
<img src="static/vs_lab_bias_blank.png" style="width: 1000px">
]

.footer[
n = 18
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Note that, because we are looking at reaction time, better performance will result in points below the zero line rather than above it, as lower points represent faster responses. So let's take a look at the visual search data.

---

.mycenter[
<img src="static/vs_lab_bias.png" style="width: 1000px">
]

.footer[
n = 18
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

As you can see, we still observe a large amount of variability across our participants. Based on the main effect, it seems as if most people start searching in the upper left before moving toward the lower right. While we can't tie these data directly to the change detection results, the pattern looks similar to what we saw earlier.

---

.mycenter[
<img src="static/tl_display.png" width=300px/>
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

The last dataset I want to show today makes just one change to this design.

---

.mycenter[
<img src="static/tl_display_blue.png" width=300px/>
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

On 50% of trials, the target is a color singleton, leading to a popout effect. The idea is to test whether this bias is still present when we remove encoding limits from the task demands. If our bias is in fact due to differences in sequential encoding strategy, this manipulation should remove most of the variability across individuals.

---

.mycenter[
<img src="static/vs_prolific_bias_black.png" style="width: 1000px">
]

.footer[
n = 40
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

Here I'm showing a replication of the previous effect with our new sample: again, we see a large amount of variability when participants are forced to search sequentially. Now let's compare this to when the target is a color singleton.
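One way this comparison could be quantified, sketched under assumed names: a hypothetical trial-level search table `search` with columns `subject`, `quadrant`, `rt`, and a boolean `singleton` flag, analyzed with the same mean-centering logic as before but on reaction time. This is only an illustration of the idea, not our actual analysis.

```python
def rt_quadrant_bias(df):
    """Median RT per subject and quadrant, centered on that subject's overall
    median RT. Negative values mean faster-than-usual responses."""
    overall = df.groupby("subject")["rt"].median()
    by_quadrant = df.groupby(["subject", "quadrant"])["rt"].median().unstack("quadrant")
    return by_quadrant.sub(overall, axis=0)

# If the bias reflects a sequential encoding strategy, the spread of the
# quadrant effects across subjects should shrink when the target pops out.
serial_spread = rt_quadrant_bias(search[~search["singleton"]]).stack().std()
popout_spread = rt_quadrant_bias(search[search["singleton"]]).stack().std()
print(f"spread without popout: {serial_spread:.1f} ms; with popout: {popout_spread:.1f} ms")
```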
---

.mycenter[
<img src="static/vs_prolific_bias.png" style="width: 1000px">
]

.footer[
n = 40
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

???

With this small difference, most of the variability and most of the main effect across quadrants are gone. These results are consistent with our claim that the observed bias is due to differences in encoding strategy.

---

.medium[.space[
Differences in encoding strategies can have a large impact on how individuals perform on specific change detection trials.
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

--

.medium[.space[
These differences reflect a stable individual trait, rather than noise due to random chance or display differences.
]]

--

.medium[.space[
These biases may extend to any cognitive task where information is attended sequentially.
]]

???

With that, I want to thank my collaborators, my lab, and all of you for listening, and I'm happy to take any questions.

---

---

.mycenter[.wr-display[
<img src="static/wr_display.png" width=300px/>
]]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

---

.mycenter[
<img src="static/exp4b_blank.png" style="width: 1000px">
]

.footer[
n = 113
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

---

.mycenter[
<img src="static/exp4b_first.png" style="width: 1000px">
]

.footer[
n = 113
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]

---

.mycenter[
<img src="static/exp4b.png" style="width: 1000px">
]

.footer[
n = 113
]

.footer-right[
.sm-logo[<img src="static/twitter.png" width=20px/>] @ColinTQuirk
]