It’s Not the Assessment – It’s How You Use It


Clare’s mother always used to tell her, “It’s not what you say that is the problem, it’s how you say it!” This same thought rings through our heads as we talk with teachers, administrators, and coaches about assessment. Most of the time, the problem is not which assessments are being used; it is how they are being used. Assessment is not an end, but rather a means to an end. What is learned in the process of assessing students is every bit as important as the final outcome of the assessment. We acknowledge that so much is out of our control right now when it comes to assessment, but we believe we also need to remember what we can control. We have the power to make the most of the assessments we have and use them to help us understand our readers. Here are some ideas to help you make the most of the assessments you are using.

An Assessment Is Not a List of Numbers

Required district or school assessments are often viewed as a list of numbers that need to be provided to a certain person by a certain date. The goal is to get them done, recorded, and delivered. Kids are then sorted by scores, and resources for extra services are allocated. The quantitative performance is emphasized and is often the only aspect of the assessment that is considered.

There is so much more information we can gather from these assessments. It is a shame we rarely take enough time to use them to inform our classroom instruction. We need to look at the actual assessments (not just a compiled chart of scores) and begin to analyze how the students performed both quantitatively and qualitatively. We look for patterns in a student’s performance and determine which strategies the child used while taking the assessment.

In a running record/DRA/Fountas and Pinnell type assessment, we analyze miscues, self-corrects, and the quality of the accuracy score achieved by the student. For example, if a student receives an accuracy score of 99% but has 13 self-corrects and 10 rereads, we reconsider how “easy” that text really was for the student. It seems to us that the student had to do quite a bit of work to accurately decode the text, and we wonder if he/she could easily read this text level when working independently in the classroom.

We are also concerned with the amount of data being lost when teachers and coaches do not take the time to analyze the miscues and self-corrects. This type of analysis gives us a window into how a student approaches texts and monitors for meaning. These assessments can help a teacher determine the type of small-group and whole-class instruction that needs to be done to support her readers in using strategies effectively and flexibly. This type of analysis is typically not required — only the list of levels needs to be turned in. Yet conducting it is essential for making the best use of our students’ time and maximizing the value of the assessment tool.

In a phonetic or phonemic type of assessment, such as the DIBELS, we look at how the student earned his/her score. Some students perform the tasks we ask of them very quickly, and this speed allows them to reach a benchmark score even though they may have been fairly inaccurate. Other students work slowly and meticulously and may have a higher percentage of accuracy overall, but do not reach a benchmark score due to their plodding pace. Some students reach a benchmark score by quickly giving the first sound of each word, while others reach a benchmark by giving the initial, medial, and ending sounds — same scores, but qualitatively very different readers. What about the student who can tell you all the letters in the alphabet when there is no time constraint, but then does not meet a benchmark score when asked to perform the same task within a time limit?

When we simply look at these students based on the numerical score they achieve on these assessments, we lose so much data. Knowing a student or group of students did not reach a benchmark helps us determine that these kids need support, but it does not tell us the type of support they need. The students in the examples above might require very different types of instruction to acquire the skills they need.

Required Assessments Count as a Conference

In several districts we are coaching teachers on how to use conference notes to plan whole-class, small-group, and individual lessons. Lately in these meetings, teachers have sheepishly admitted that they did not do any conferring in the last two weeks because they had to administer assessments. We emphatically reassure these teachers: “Assessing students counts as a conference!” Lucy Calkins describes the structure of a conference as research, decide, teach. When we spend time assessing our students, we are “researching” what they need to learn. When we take this information and use it to decide what our students need to learn and then organize that data to help us form whole-class, small-group and individual lessons, we are conducting an essential part of a conference. It may be a long conference, but it sets us up for lots of “teach.”

You can use the required assessments you are gathering in so many different ways in your day-to-day teaching. For example, once you complete DRAs or running records, you might take the time to build children’s independent reading bags with them right then. You have current information on their interests and reading levels – why not use it to match texts to readers? Many teachers are now conducting their DRAs/RRs near their library area so they can have easy access to books.

Another way to translate assessments into day-to-day teaching is to use your conferring notebook while you are doing your required assessments. This way, once you have completed and analyzed the assessments, you can jot down notes to help you focus your upcoming lessons. For example, if we notice a student is missing details in a retelling or is reading word by word, we would write those notes in our conferring notebook to guide future small-group or individual lessons. It is so helpful to take a little extra time after each assessment to think about what you learned and how you can use that data tomorrow to lift the quality of your instruction.

Make the Most of What You Have

We are frequently asked which assessments we think are the best, and whether a district should switch from an assessment it is currently using to a new assessment that is being marketed. We could debate for hours over the pros and cons of each assessment. In the end, we believe that what is most important is that you assess the full profile of a reader and use the assessment data to inform your teaching. If a new assessment comes on the market, you have to be sure it is worth the money to purchase and the time it will take for teachers to learn to use it well. The critical question is this — will the new assessment truly give us information that our current assessment cannot? If the answer isn’t a resounding “yes!” then the new assessment is likely a waste of time and money. There will always be something out there that is new and advocated by colleagues. If we buy each new product that comes out, we will never take the time, often years, for an entire staff to master the assessments already in place. Sometimes it is better to stay the course with the tools we have and recognize that this is the best decision for our district at this point in time.

We have the opportunity and privilege to work with many schools using a range of different assessments. Our experience in these schools reinforces that there is not one “right way” to go about assessing our students. This work is messy and rarely precise. Yet we are convinced almost any assessment can be valuable for teachers if we triangulate it with multiple authentic data points and use the data to inform our instruction and monitor student learning.


Teachers Matter, Kids Count!
Tammy and Clare
