Talking D&T

Holistic Evaluation: The True Measure of Design & Technology Capability

Dr Alison Hardy Episode 200


Holistic, summative assessment is essential for properly evaluating design and technology capability, drawing on research by Richard Kimbell and Kay Stables that predates the National Curriculum and has been tested across multiple countries and age groups.

  • Effective capability assessment tasks allow pupils to demonstrate their ability to respond to an unfamiliar design context
  • Assessment focuses on students' design capability, not their knowledge of the specific context
  • Teachers should provide resources for rapid immersion in the context without predetermined solutions
  • Real-time assessment is more authentic than extended assessments over weeks
  • "Unpickled portfolios" are raw and produced in real time, unlike "pickled portfolios" which become formulaic and focused on presentation
  • Successful capability assessment requires student ownership, allowing innovation and prototype testing
  • Holistic judgment approaches like comparative judgment provide reliable evaluation methods
  • Assessment should support curriculum aims rather than distort them


Resources and References from Episode 4: Holistic Evaluation: The True Measure of Design & Technology Capability

  1. Research by Richard Kimbell and Kay Stables on holistic, summative assessment of design and technology capability
    • Pre-National Curriculum research from 1992 onwards
    • Tested across different countries and age groups
    • Research on comparative judgment as a reliable method of assessment
  2. STEM website where much of Kimbell and Stables' published work can be found
  3. Current research and evaluation study mentioned:
    • Dr. Hardy working with Kay Stables and Sarah Davis
    • Using Kimbell and Stables' work on assessing D&T capability as a pre- and post-intervention test
  4. Previous episodes in the assessment mini-series:
    • Episode 1: Overview of assessment in D&T
    • Episode 2: Assessing knowledge
    • Episode 3: Formative assessment, design crit and peer assessment

Acknowledgement:
Some of the supplementary content for this podcast episode was crafted with the assistance of Claude, an AI language model developed by Anthropic. While the core content is based on the actual conversation and my editorial direction, Claude helped in refining and structuring information to best serve listeners. This collaborative approach allows me to provide you with concise, informative, and engaging content to complement each episode.



Support the show

If you like the podcast, you can always buy me a coffee to say 'thanks!'

Please offer your feedback about the show or ideas for future episodes and topics by connecting with me on Threads @hardy_alison or by emailing me.

If you listen to the podcast on Apple Podcasts, please take a moment to rate and/or review the show.

If you want to support me by becoming a Patron click here.

If you are not able to support me financially, please consider leaving a review on Apple Podcasts or sharing a link to my work on social media. Thank you!

Alison Hardy:

You're listening to the Talking D&T podcast. I'm Dr Alison Hardy, a writer, researcher and advocate of design and technology education. In each episode I share views, news and opinions. In episode one of this mini-series I gave an overview of assessment in D&T, in episode two I talked about assessing knowledge, and then in episode three last week I talked about formative assessment methods, the design crit and peer assessment and the like. So this episode focuses on the holistic, summative assessment of design and technology capability. You'll need to go back to one of the earlier episodes in the series if you're not familiar with the construct of design and technology capability, but it is something that can really only be assessed summatively. You can get snapshots of different aspects of it, but really it's a holistic, summative assessment, and that's what the research says. The predominant research comes from Richard Kimbell and Kay Stables, and their work on this pre-dates the National Curriculum in 1992, so it's a very grounded set of research studies, and they've tested this approach across a number of different countries as well as with different age groups. You can find much of their published work on the STEM website, and I'll put a link to that in the show notes. I'm currently working with Kay and Sarah Davis, and we are using their work on assessing D&T capability as a pre- and post-intervention test for a research and evaluation study we're doing at the moment. So I think this is a really grounded approach where there's lots of research. And so I'm just going to emphasise again: this is about summative assessment of D&T capability. It's not about discrete outcomes. It's not about a finished product. It's about their performative capacity, what you are seeing and assessing when they are given a design challenge or a design context.

Alison Hardy:

Now, I can hear some people saying: isn't that the non-examined assessment in the UK for GCSEs? Isn't that coursework? Well, it kind of is and it kind of isn't. As I talk about the research, you'll see that some of the things I'm going to say challenge some of the things we have come to accept as the way things should be assessed in design and technology. So some people might find some of this a little bit uncomfortable, but hey-ho, there we go.

Alison Hardy:

That's what good research does, I think. So, first of all, as the teacher you need to design effective capability assessment tasks that allow the pupils to demonstrate their ability to respond to an unfamiliar or new design context. Now, what the research from Richard and Kay says is that selecting or designing that design context, as the teacher, is a challenge, because you are not assessing their previously learnt experience of the context itself. For example, if you want them to design a kite, you're not assessing what they know about kites. You're assessing their previously learned knowledge about structures, about understanding users, about being able to join materials, about being able to generate design ideas, about ways of testing things and identifying what the parameters or specification is for the design, about being able to think about where they might need to make compromises for the kite. Knowing about flight is kind of secondary. And so that's where designing the context, that is, the packaging for the assessment, let's put it that way, is really tricky, because you're not assessing their knowledge of the context; you're assessing their design and technology capability. So that's the first thing that comes out in the research. It's really key that you're giving the children the opportunity to draw on previously learned knowledge. And it doesn't matter what year group or age group of children you're doing this with. This is not about doing summative assessment at GCSE, you know, post-14, post-16. This is about assessing their developing design and technology capability, and in Kay and Richard's work they did assessments with all sorts of different age groups. When we were designing the one that we're using for the research, one of the biggest challenges myself, Sarah and Kay had was identifying the design context.

Alison Hardy:

If you're going to set an effective capability assessment task, then you need to think about providing resources to help them with rapid immersion in the context. So, for example, if you're thinking about a kite, then, going back to the previous episode where I said to be careful about giving them information and high-quality examples that solve the problem, you don't want to give them kites, but you might want to give them things that are about flight or folding or celebration, because it might be that they want to design a kite that's about a celebration. You might give them things to handle that are about tension, about flexibility with fabric and holding things tense. So you're giving them resources to help with that rapid immersion into the context, and you're trying to give them space to be innovative. That's something that's really key: you're not coming to this, as the designer of the assessment task, with a predetermined solution. Then think about how you might manage this, and you might have started to get a hint of how.

Alison Hardy:

Currently in England, the non-examined assessment, the coursework, happens over a lot of weeks, a lot of hours. Well, if what we're saying is that design and technology capability is about demonstrating that they're working and thinking like designers, then that's not in real time. An hour every week isn't in real time. So we need to think about how we would assess this in real time. You might do it over a couple of weeks; we've been designing assessments that take an hour and a half to two hours. And again, Richard and Kay worked with exam boards previously to do design challenges that were done in real time. One of my bugbears is that if we have extended assessments over several weeks, over several months, then actually that's using valuable learning time when you could be developing their design and technology capability. So the research says that we need to balance assessment requirements with teaching time. And then there's thinking about how the children are presenting it.

Alison Hardy:

And again, Kay, Richard and others who worked at the Assessment of Performance Unit really thought about how this was captured, and they talk about the pickled portfolio and the unpickled portfolio. The unpickled portfolio is raw and produced in real time. Pickled portfolios become formulaic, teacher-led and focused on presentation. We used to call it neat nonsense. I know there isn't quite so much of that anymore in summative assessment, but it does still happen, and so that's where this planning of the curriculum is really key. When you're designing these assessments that are in real time, they don't need to be formulaic or heavily structured, because you've designed the assessment for that moment in the children's development in design and technology, whether that's when they're 8, 12, 14 or 16, or whenever it is. You know what they've learned before, and that's what you're assessing: their growing design and technology capability. And it's about the content rather than the presentation.

Alison Hardy:

And so, for a successful capability assessment, according to the research, pupils need to be able to take ownership of the design task. And I can hear teachers going, well, then they can't make it. Well, in these assessments we talk about prototyping. This isn't about a finished, highly crafted product; if we want to be doing that, then we go and do 3D design, or a craft curriculum or qualification. This is about assessing their capability, and so they need to have ownership, which might mean that they make decisions which lead to failure, or some aspect of failure, because they need to be allowed to be innovative. You'll have taught them strategies for being innovative, and you want them to use those, and you want to give them time to model and test their ideas. That modelling can be quick, and it could be prototyping in foam or card or fabric. It's not about it being perfect.

Alison Hardy:

So, thinking about those different aspects of designing a capability assessment task: it's holistic. So let's come on then, finally, to the actual assessment, the actual grading. I've kind of leapt around here, because I've got all sorts of notes. If we're saying it's holistic, then we can't break it down into itemised bits, to begin with. We have to think about holistic judgment, and again, Richard particularly has done quite a lot of work on comparative judgment as a very reliable method of assessing, and on doing that first. So, basically, comparing one pupil's work to another, ranking them, and doing that with a group of people. That's a very simplistic way of putting it; there is an awful lot of literature on this that is much more rigorous than the way I'm talking about it. But it's done holistically. So it's holistic, and it has to be in response to a design context.

Alison Hardy:

Thinking about the design context is really important, because you're not assessing their knowledge of the design context; you're assessing their ability to respond to that design context. You're actually assessing their design and technology capability, and they need to draw on previously learned knowledge. So you've got to think about the design of that assessment. You've got to give them some resources that are relevant to the context to help them get to grips with it quickly. You need to allow them to take ownership. It needs to be in real time. It's unpickled rather than pickled: it's not formulaic, it allows for innovative responses, and it allows them to model and test their ideas. And finally, assessing it is better done holistically and comparatively to begin with, and then starting to think about itemising different aspects. With all of that, and that's been a real gabble through, how might you revise your approach to capability assessment? How might you do this differently as a summative assessment? And what one change would make the biggest difference to the summative assessment of capability in your setting? So that's the final episode.

Alison Hardy:

I feel like I've talked really quickly there, and there's a huge amount in there, so there will be some stuff in the show notes. Four episodes: thinking about assessing knowledge, where there's very little literature; formative assessment, where there's a lot more, with some really exciting things coming out of the Netherlands that you can draw on, plus peer assessment and the design crit; and summative assessment, thinking about design and technology capability and assessing it in real time. And assessment should support curriculum aims, not distort them, so make sure that your assessment tool is assessing what you think it's assessing. So that's it for this mini-series. I've got a couple more episodes to do that will finish off this series about what the research says about design and technology. And, as ever, let me know about anything that you're doing to change your practice, or that you think builds on the research I've talked about in this episode.

Alison Hardy:

I'm Dr Alison Hardy and you've been listening to the Talking D&T podcast. If you enjoyed the podcast, then do subscribe on whatever platform you use, and do consider leaving a review, as it helps others find the podcast. I do the podcast because I want to support the D&T community in developing their practice, so please do share the podcast with your D&T community. If you want to respond to something I've talked about or have an idea for a future episode, then either leave me a voice memo via SpeakPipe or drop me an email. You can find details about me, the podcast and how to connect with me on my website, dralisonhardy.com. Also, if you want to support the podcast financially, you can become a patron. Links to SpeakPipe, Patreon and my website are in the show notes. Thanks for listening.
