The Danger of Partial Data

Tuesday, February 23, 2021

 “a writer is someone who forces what's living in their head to pay rent” (@patricknathan on Twitter)


The following was in our staff newsletter last week:


D/F Data Updates - [Numbers of D's and F's broken down by grade--removed here to avoid any privacy issues.]


If you have a student earning a D or F, they should be invited to your Extended Learning Session every opportunity that exists for your class. 


Beyond the statement that “they should be invited to your Extended Learning Session every opportunity,” no commentary was included. And yet somehow this news item feels judgey. As a student of language and communication, I want to think about why that is. 


  1. Statistics about D’s and F’s have not previously been communicated to us--at least not that I can remember. Their inclusion in the newsletter implies...something. The assumption I leap to is that these numbers are (a) high and (b) unacceptable. They seem high and unacceptable to me, at least.

  2. This information was not accompanied by any promise that teachers have been, are, or will be engaged in discussion about why these numbers are what they are.

  3. The only commentary, although couched in passive voice, was still about what teachers should do about this. 

  4. There’s a lot of data left out. For example, it might be informative to include the following:

    1. Attendance: is there overlap between the students who attend class in any form and those who are passing? (Shouldn’t there be? If kids can pass high school without showing up to class, is high school even worth it?)

    2. Attendance: of the students who *can* attend in person, how many actually ARE showing up in person? For those who have selected to be in person but are attending remotely, what are their reasons?

    3. Attendance: when we invite students to extended learning periods, how many show up? (Note: I invited 12 students to my 1st extended learning period today, and one kid showed up. That one kid was NOT one of the 12 I invited but one who showed up to class earlier in the day and wanted some extra help on an assignment.) What “should” happen when we invite and they do not attend?

    4. Attendance: of the students who are marked “present,” how many of them show evidence of actually being intellectually present? How many log in and walk away from their computers? How many would respond if called on in class, and how many would simply not be there, leaving a big, gaping, awkward hole in the conversation?

    5. Attendance: How many of the students who are marked present arrived partway through class and/or left early?

    6. Effort: How many of those F’s were earned on work assigned in class, during class hours, while the student did nothing? How many of those students were non-responsive when a teacher tried to assist?

    7. Non-school-related issues: How many of those F’s are associated with students who have lost a parent or loved one to COVID? How many are working to help out their families?

    8. Habits: How many of those students have not turned in any work for the last six weeks but fully intend to complete 18 weeks’ worth of work in the 18th week and expect teachers to grade 18 weeks of work in a matter of hours? How many did that last semester, last year, for the last six years? Have we created this problem? In trying to solve the problems of differentiation and standards-based grading vs. traditional grades, have we accidentally caused students to develop habits that will ultimately not serve them very well?

    9. Trends: How different are these numbers from past years? How many students are represented by these numbers? Is one student failing five classes counted five times?

    10. Ownership: Can we have a conversation about how often in past years I would stand very close to a student and stare at him/her until he/she reluctantly started to write? Would I do that again if my student were coming to school? Of course. Should I? That’s another conversation altogether.

  5. Finally, the real knife buried in this data is the unvoiced implication that somehow teachers should solve this problem. But HOW? We’ve been solving problems five ways from Tuesday (is that the phrase?) for a year now. We’ve learned how to teach remotely and THEN how to teach when five kids are in the room and the rest are (supposedly) remote but with their cameras and microphones off. I’ve tried SO many new things. I’ve implemented every bit of teaching wisdom I know: nurturing low-pressure conversations and interactions, connecting to prior knowledge, linking everything six times so you can’t MISS getting to it, planning every lesson down to the tiniest detail so that it’s all ready to present in multiple modalities, reading the books out loud, providing graphic organizers for thinking, small group, large group, individual instruction, Edpuzzles, Padlets, Jamboards, social annotation, Kami, Peardeck, Poll Everywhere, collaborative slides, accepting late work, offering individual help…. I’m saying that if I knew what else to do, I would have already done it, but I’m already working harder with less feedback and less reward, and I just don’t know what else I can possibly do. You can show me evidence that I’m doing it wrong, but without helping me see how to do “it” right, without offering some sort of assistance, the judgment feels unhelpful and frustrating at best.

  6. Is the same level of innovation, effort, support, grace, and trust that I extend daily to my students, including the ones who log in and walk away, being shown to me? It doesn’t feel like it.



Thursday, February 25, 2021--More Data


In December, the week before Christmas, while we were fully remote and ending the semester, the district mandated that I give my students a survey about student engagement. “Well,” I thought, “this is bad timing! Still, I really do want to know what motivates my students and why they are and are not involved.” My colleagues were upset about the survey, but I thought, “Meh. I do want to know what kids are really doing behind their dark screens and why they don’t respond when I ask them questions out loud or in the chat.” Then I opened the survey. It wasn’t about the students, really. It wasn’t about how much sleep they get. It wasn’t about how often they have other tabs open and are doing other things. It wasn’t about whether they have their phones in front of them during class. It wasn’t about whether they are listening to music with lyrics while they try to read difficult texts. It wasn’t about whether they have a quiet place to study, much less “attend” school. It wasn’t about whether they are “going to school” from bed with the lights off. (Side note: I have since had several students tell me this is what they do.) It wasn’t about whether they complete work independently or even accept that responsibility as their own. It wasn’t about whether they do work during class time when class time and support are provided. Nope. The survey asked questions like “When you feel like giving up on a difficult task, how likely is it that this teacher will make you keep trying?” (For the record, every single one of my students who actually took the survey said I would probably “make” them keep trying.) It turned out that this survey about student engagement was less about students and more about teachers. I saw why my colleagues were angry.


As I said, this was given in mid-December, and today, more than two months later, we were given the results. We were told we weren’t required to look at them this time, but of course I knew I would. At first glance, they are pretty demoralizing. The results open up to a home page with a summary. My summary said that more than half of my students responded “positively” to questions about me having high expectations, my pedagogical effectiveness, and my student-teacher relationships. A dismal (to me) number responded “positively” to valuing the subject I teach and class engagement. Ugh. That smarts. But then you can dig in a little further. My results said I was below the school and district average in almost every category. (Also, I could see how I compared to social studies teachers, which seems like an odd piece of data to offer me.) So...that’s depressing.


But here’s the thing: that data doesn’t include a lot of potentially telling information. Should I use this information to make my students see my subject as relevant and valuable? Of course! But what that data does not show is that I teach students who chose NOT to take the AP courses and, for the most part, the honors courses offered. This survey took a group of students who, before they arrived in my class, had decided that English was not really where they wanted to spend their time and energy. It surveyed them the week before Christmas, while fully remote during a pandemic, and asked how important English is to them. Then, in February, it sent me data saying that my students don’t value my class. Oh. So that stings. But...it’s also sort of...to be expected? Kids who chose not to take the challenging English classes don’t value English? Got it. Other teachers scored a lot higher than I did, but some of them teach electives and AP and start on second or maybe even third base. That’s not reflected in the data.


You know what else is not reflected in the data? How many books the kids had in their homes growing up, a data point that’s been shown to be statistically significant. Also not reflected: are you a native English speaker? Some of my students are not. Is the same true in AP classes? Also not reflected: how many books have you read in the last five years? A student today told me he hasn’t read a whole book in many years. (For the record, this student also told me a couple of weeks ago that my class is his favorite--not because it’s rigorous, mind you.) Also not reflected in the data: how did my non-AP students rank their other classes? If the point I should take from this is that I should work to make my subject matter feel relevant to my students, then yes, I agree. But comparing my ratings to those of teachers whose students self-selected into AP courses feels, well, really unfair.


Then I dug a little deeper into the questions. When a student said they were “somewhat interested” or “somewhat excited to go to class” or “somewhat eager to participate,” that was counted as a negative response. So if many of my students who self-selected to not take AP or honors classes said they were “somewhat” excited about my class, that was counted against me. What in the actual [bleep] are we doing to teachers here?
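
To see how much that coding choice alone can matter, here is a quick sketch in Python with entirely invented numbers (I don’t have the raw responses, so the scale labels and counts below are hypothetical):

    # Hypothetical response counts for one survey question.
    # All numbers here are invented for illustration only.
    responses = {
        "very excited": 12,
        "somewhat excited": 18,
        "neutral": 8,
        "somewhat unexcited": 7,
        "very unexcited": 5,
    }
    total = sum(responses.values())  # 50 students

    # Coding A: "somewhat excited" counts as a positive response.
    positive_a = responses["very excited"] + responses["somewhat excited"]

    # Coding B: only "very excited" counts as positive
    # (apparently how this survey scored it).
    positive_b = responses["very excited"]

    print(f"Coding A: {positive_a / total:.0%} positive")  # 60% positive
    print(f"Coding B: {positive_b / total:.0%} positive")  # 24% positive

Same students, same answers: the only thing that changed is where the line between “positive” and “negative” was drawn.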


Finally, it appears to me that only 50 of my students actually took the survey. For the record, I teach more than 50 students.


I know for certain that the results of this survey demoralized more than just me. If I assume positive intent, I can see that it was intended to help us reflect on what we are doing well and what we need to work on. Acknowledged. But here again, this information is being shared with us in a way that, instead of spurring me to greatness, makes me feel like I am, again, a miserable failure. I am less than my peers--who teach more motivated and confident students. My students feel negatively about the value of my class (except that I will “make” them keep trying and I will “make” them explain their answers. Oh, and 97% think I’m extremely knowledgeable about my subject, so there’s that) in comparison to my peers whose students selected to take a higher-level course or maybe even a non-required course.


This summer, a friend posted a scientific study on social media and, in their (staying gender neutral) summary, commented that masks are only minimally effective in protecting you from the coronavirus. “Is this true?” I wondered. And if it is true, what are we to do? So I waded through the whole study, which was filled with A LOT of numbers and terminology that I don’t encounter much. At the end of the study, the scientists commented that the study focused ONLY on the effect for the person wearing the mask, and that the small protection level did NOT mean masks are ineffective at reducing community spread of the virus if everyone wears one; other studies had shown that masks work not by protecting the wearer but by protecting the people around the wearer. If everyone wore a mask, everyone would be safer. I feel like teachers--at least at my school--are being given only the misleading summaries. We are being shown only part of the data and, because we care an awful lot about what we do, we’re taking it personally. Indeed, if we were not meant to take it personally, what are we meant to do with it? You can show me where I need to improve, but if you show me where I fail to measure up to other teachers teaching different students, what can I do with that information but feel sad and ineffective and ponder what else I should maybe do with my life?


If teachers were to tell students that they were getting a failing grade because we were only “somewhat” excited to grade their essays and then told them how they compared unfavorably to all of their peers, even though we knew full well those peers were taking totally different classes, what would we expect to happen? Are teachers not also human? 


Data is good and useful, but partial data can be dangerous, insulting, and counterproductive.