Metrics, learning taxonomies, and web literacy

This year the Mozilla Foundation aims to ‘ship’ 10,000 contributors (read more about this on Adam Lofting’s blog). Much of that will come through people stepping up as mentors to help teach the web.

[Image: "count all the things"]

What we need to help with this are metrics – discrete, measurable things (although not necessarily always numbers). These metrics will allow the person who's learning to teach the web to feel a sense of progression. They will also allow us at Mozilla to see when that person has reached a threshold to count towards our target number of contributors.

One of the things I'm interested in around all this is what it means to 'get better' at web literacy. I think there are at least three things to bear in mind here:

  1. What counts as incremental improvement depends upon context (age, location, culture, etc.). For example, what counts as getting a bit better at something for a six-year-old might be different from what counts for an adult (and, perhaps, vice versa).
  2. People don’t tend to get better at things in a strictly linear way – as any parent or teacher will testify. Learning comes in fits and starts and tends to rely upon things like repetition and spaced learning.
  3. What counts as 'getting better' at one thing is different from 'getting better' at another. This may seem obvious, but it's worth stating. It means, for example, that the metrics we use around the 'Remixing' competency of the Web Literacy Map should probably be quite different from those we use for the 'Privacy' competency.

What it means to 'get better' at something has taxed educators and researchers for a long time. To help with this, a number of learning taxonomies exist. In fact, one probably popped into your head when you read the title of this post: Bloom's Taxonomy. While there's nothing inherently wrong with using Bloom's (or updates to it), it feels a bit… stale. I'd like to try something different to underpin our work with Webmaker.

After a brief search I came across this document from University College Dublin (UCD). It’s an overview of several learning taxonomies, including the Structure of Observed Learning Outcomes (SOLO) taxonomy and Fink’s Taxonomy of Significant Learning (from Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses). While I’ve come across SOLO before, Fink’s taxonomy is new to me:

[Image: Fink's Taxonomy of Significant Learning]

I need to do some more reading, but I do like that the taxonomy is circular rather than linear. The description and ‘trigger’ verbs related to the taxonomy listed in the UCD document are also helpful:

[Image: table of descriptions and 'trigger' verbs for Fink's taxonomy, from the UCD document]

Ultimately, I think we’ll be looking at a combination of ‘human’ metrics (this peer thinks what you’ve done is awesome!) with more ‘machine’ metrics (the resource you created has been used/remixed X number of times). But it’s important that we get the thing that we’re mapping onto right rather than trying to retro-fit after the fact.
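To make the idea of blending these two kinds of signal concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration – the metric names, weights, and threshold are placeholder assumptions for discussion, not an actual Webmaker implementation.

```python
# Hypothetical sketch: blending 'human' and 'machine' metrics into one
# progression signal. All names, weights, and thresholds below are
# illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class ContributorMetrics:
    peer_endorsements: int  # 'human' signal: peers marking work as awesome
    remix_count: int        # 'machine' signal: times a resource was used/remixed


def progression_score(m: ContributorMetrics,
                      human_weight: float = 0.6,
                      machine_weight: float = 0.4) -> float:
    """Weighted blend of human and machine signals (weights are arbitrary)."""
    return human_weight * m.peer_endorsements + machine_weight * m.remix_count


def counts_as_contributor(m: ContributorMetrics, threshold: float = 5.0) -> bool:
    """Has this person crossed the (illustrative) contributor threshold?"""
    return progression_score(m) >= threshold
```

The point of the sketch is the shape, not the numbers: whatever taxonomy underpins the work determines what gets measured and how the signals are weighted, which is why that mapping needs to be right first.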

If you’ve got a learning taxonomy that has worked well for you, I’d love to hear about it. Please do get in touch – I’m @dajbelshaw on Twitter or you can [email me](mailto:doug@mozillafoundation.org).

