Sniping at students to sell vouchers – Bursting the Bubble Sheet part 3

When many students show growth, it should be celebrated. If all students show growth, they should all have the opportunity to be acknowledged as growing. So why does North Carolina use a contracted formula, originally developed for agriculture, to plow under public school student performance with misleading data?

The system is not designed to label all students as successful, even if they all grow as learners. When student academic growth scores are based on how students compare to each other (percentile rank), it sets up a zero-sum game: in order for some students to be labeled as “growing,” others must be labeled as “shrinking.”

By making “success” a limited label that some students can only achieve at the expense of others, and by using this premise as the baseline for evaluating teacher effectiveness and school letter grades as well, it’s no wonder private school voucher pushers love this data.

This data isn’t being used to support public school students; it’s being used to privatize education so adults can cash in on taxpayer money. Voucher supporters brag that these so-called “schools” can be started with only a fire and health inspection.

This “problematic” formula isn’t collecting data on private schools receiving publicly funded vouchers because North Carolina has the least regulated voucher program in the country. The data is designed to market vouchers, not to honestly assess student learning or knock private schools off their undeserved pedestal. 

Compare the history of public school funding:

with the future of private school funding:

Your tax dollars don’t just pay for vouchers; they pay SAS to generate misleading data on public school student, teacher, and school performance. Here’s a glimpse of how this is done with student growth scores.

Math with Mickey

To get you up to speed with the foundation of this rank-gain formula, let’s pretend 5 Mickeys from “Steamboat Willie” race each other in a 100-yard dash.  After the first race they finish in this order with these times:

While any of these finish times would be respectable for a student, a fixation on “place” would label our fifth-place finisher a poor performer simply because that student finished last. Any additional context around that “last place” finish would be dismissed.

That would be like telling a student who earned a “C” that they’re not good enough because other students scored better.

Let’s assume the Mickeys race a second time and they finish in this order with these times:

In the second race, each student improves their finish time – they all grew in speed!

However, EVAAS growth conclusions are based on rank, not speed. Based on a rank-gain system, which Mickey would receive the highest growth score?

If you guessed Blue Mickey – you’re right.

Because Blue Mickey finished in second place, an improvement on his third-place finish in the first race, this system would credit Blue Mickey with the highest growth. He exceeded his “expected” growth by beating his previous third-place finish.

Based on a rank-gain system, which Mickey would receive a negative growth score? Look at the race results again:

Race 1

Race 2

If you guessed Green Mickey – you’re right.

Under this system, it doesn’t matter that Green Mickey ran 5% faster in his second race than in his first. What matters is that Green Mickey finished in third place when he was projected to finish in second place based on past performance. By that standard, Green Mickey lost growth despite running almost a full second faster.

As for the other three Mickeys, they would be labeled with a growth score of 0 because they finished both races in the same place. It wouldn’t matter that all three of them finished the second race faster, because the growth formula focuses on the place one finishes, not the time.
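To make the arithmetic concrete, here’s a minimal sketch of rank-gain scoring in Python. The finish times (and the colors other than Blue and Green) are hypothetical; only the finishing places come from the races above, and the real EVAAS model is far more elaborate than a simple place-to-place comparison:

```python
# Rank-gain scoring with our five Mickeys. Times (and the colors other
# than Blue and Green) are hypothetical; only the finishing places match
# the races described above.
race1 = {"Red": 16.0, "Green": 19.0, "Blue": 20.0, "Yellow": 21.0, "Purple": 22.0}
race2 = {"Red": 15.5, "Blue": 17.5, "Green": 18.05, "Yellow": 20.0, "Purple": 21.0}

def places(times):
    """Map each runner to a finishing place (1 = fastest)."""
    order = sorted(times, key=times.get)
    return {name: i + 1 for i, name in enumerate(order)}

p1, p2 = places(race1), places(race2)

for name in race1:
    rank_gain = p1[name] - p2[name]      # +1 means moved up one place
    secs_faster = race1[name] - race2[name]
    print(f"{name}: rank growth {rank_gain:+d}, yet ran {secs_faster:.2f}s faster")
```

Every Mickey runs faster in the second race, yet this lens awards positive “growth” only to Blue, negative “growth” to Green, and zero to everyone else.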

EVAAS documents clarify that a growth score of 0 doesn’t actually mean a student failed to gain knowledge; however, this detail gets lost in translation among policymakers, who assume it means those kids labeled “0 growth” aren’t learning much.

When many students show improvement, it should be celebrated. If all students show growth, they should have an opportunity to be acknowledged as growing.

That’s not how EVAAS works.

Because the rank “curve” moves with average performance, when all students perform better, the improvement is masked by a new set of goalposts. The image below is from page 32 of the EVAAS explainer SAS created for the NC Department of Public Instruction.

Instead of delivering this context in plain language, SAS documents are written by statisticians for statisticians with formulas like this:

This is quite effective at intimidating folks out of asking questions and into bestowing trust on mathematicians whose math they don’t understand.
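Stripped of the notation, the core idea is a prediction-and-residual scheme: predict a student’s score from prior test scores, then call the leftover difference “growth.” In rough form, it looks something like this (my simplified sketch, not the exact formula from the SAS document):

$$\hat{y}_i = \mu + \sum_{k=1}^{K} \beta_k\, x_{ik}, \qquad \text{growth}_i = y_i - \hat{y}_i$$

where \(x_{ik}\) are student \(i\)’s prior test scores, \(\beta_k\) are weights fit to the data, and \(\hat{y}_i\) is the score the model “expects.” Land above the expectation and you get positive growth; land below it and you get negative growth.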

Fortunately, science teacher Rhett Carlson reverse-engineered EVAAS a decade ago and had his findings confirmed by NC DPI official Tom Tomberlin in a meeting at that time. Jason Wolfe was another teacher in that meeting, and both of them shared their concerns with me so those concerns could be lifted into the public domain after their earlier efforts with DPI stalled.

As teachers, we’re trained to take complex material and translate it into digestible pieces to help others learn. That’s the goal of this “Bursting the Bubble Sheet” series.

What does this look like on student testing reports?

Our understanding of how EVAAS views our Mickey racers can be directly translated into NC student testing data. Instead of the “place” finished in a race, we have a student’s “percentile.” Instead of race speed, we have “percentage” correct.

A percentile rank describes how a score compares to other scores. Imagine 100 students who all take a test and are then lined up from lowest to highest score. A student in the 75th percentile is deemed to have scored better than 74 others in that line of 100 students. A similar (though not perfect) way of thinking about this is that the student came in 26th place.

The percentile rank is not the same as a percentage correct, nor is it an indicator of it. While we can infer which Mickeys were faster than others based on the order they crossed the finish line (percentile rank), the place they finished doesn’t tell us anything about whether their times were fast or slow (percentage correct).   

It’s possible that the student ranked in the 75th percentile answered questions with 50% accuracy, or with 90% accuracy. Percentile only communicates how a student performed compared to others, NOT whether that student performed well or poorly based on demonstrated knowledge of the subject matter. (Norm-referencing vs. criterion-referencing is the topic of a future post in this series – stay tuned.)
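Here’s a quick toy illustration of that point. The cohorts and scores below are invented for demonstration, not drawn from NC data:

```python
# Percentile rank vs. percentage correct, with made-up cohorts.
def percentile_rank(score, cohort):
    """Percent of the cohort scoring strictly below this score."""
    below = sum(1 for s in cohort if s < score)
    return 100 * below / len(cohort)

# In a low-scoring cohort, 50% accuracy can earn a lofty percentile...
weak_cohort = [30, 32, 35, 38, 40, 41, 43, 45, 47, 48]
print(percentile_rank(50, weak_cohort))    # 100.0

# ...while in a high-scoring cohort, 90% accuracy earns a low one.
strong_cohort = [91, 92, 93, 94, 95, 95, 96, 97, 98, 99]
print(percentile_rank(90, strong_cohort))  # 0.0
```

Same demonstrated knowledge, wildly different percentile, because percentile measures the lineup, not the learning.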

Here’s a testing report sent home with my son. His percentile rank is shown at the bottom: 

If, the next time he takes the test, he outperforms more than 72% of students (an increase in rank above the 72nd percentile), EVAAS would assign him a positive growth score. This change in rank doesn’t include the context of WHY his placement compared to peers changed. Is it because he performed very well, or is it because his peers slipped in performance?

If on his next test he again scores in the 72nd percentile (maintains rank), he will receive a growth score of 0. Remember: this doesn’t mean he didn’t grow his knowledge; it means he didn’t move in rank.

If his next percentile rank were lower than 72, EVAAS would assign him a negative growth score. Fixating only on the change in rank misses important context for WHY a student’s rank changed.

Is it because my son performed worse than his percentile track record, or is it because his peers performed significantly better than their own previous percentile track records? 
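Boiled down, those three scenarios follow a simple rank-comparison rule. Here’s a toy encoding of it (my illustration – the real EVAAS model is statistically far more elaborate, but this captures the rank-change logic described above):

```python
# The three growth outcomes described above, as a toy rule.
def toy_growth_label(old_percentile, new_percentile):
    if new_percentile > old_percentile:
        return "positive growth"
    if new_percentile < old_percentile:
        return "negative growth"
    return "0 growth"

print(toy_growth_label(72, 80))  # positive growth
print(toy_growth_label(72, 72))  # 0 growth
print(toy_growth_label(72, 65))  # negative growth
```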

Because the rank “curve” moves with average performance, when all students perform better, the improvement is masked by a new set of goalposts.
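And here’s a small demonstration of those moving goalposts, again with invented scale scores: every student improves by the same amount, and nobody’s percentile budges.

```python
# When every student improves, percentile ranks can stay frozen.
def percentile_rank(score, cohort):
    return 100 * sum(1 for s in cohort if s < score) / len(cohort)

year1 = list(range(1, 101))        # hypothetical scale scores, 100 students
year2 = [s + 10 for s in year1]    # everyone scores 10 points higher

my_kid_year1 = year1[71]           # the 72nd student in line
my_kid_year2 = my_kid_year1 + 10   # improves right along with the cohort

print(percentile_rank(my_kid_year1, year1))  # 71.0
print(percentile_rank(my_kid_year2, year2))  # 71.0 -> rank growth of 0
```

A ten-point gain for every student, and the rank-based “growth” verdict still reads 0.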

This is neither parent- nor student-friendly.

Why does this matter?

As a parent, it’s important to understand the ways the data can underrepresent your child’s academic knowledge and skills. This data evaluates them based on how they did compared to their peers – not by the extent to which they mastered the material. 

It’s grading on a curve that insists 25% of students be labeled “not good enough,” and it’s based on a formula once used to cull dairy cows.

Remember, the same DPI official my colleagues met with a decade ago publicly admitted in 2022 that EVAAS inevitably labels a predetermined subset of teachers as inadequate despite their growth:

The data from this formula may cause undue anxiety about your child’s public education and prompt you to seek a private school voucher.

There are continued efforts to pay teachers based in part on their EVAAS scores. Knowing the deck is rigged to label 25% as inadequate, and only 25% as worthy of a “merit” bonus, is important context for understanding the flaws in how “merit” is determined.

That’s bad for teacher retention and for ensuring our children’s classrooms are led by experienced educators.

As school report card grade formulas go through a “reform” process that may weight growth scores more heavily, it’s important to understand how these growth scores are created and why they’re problematic before doubling down on the stakes of growth data.

The data deck is stacked to snipe at public school students, teachers, and schools, and then used to market private schools funded by taxpayer dollars.

In the next post, I will focus on how this data impacts whether students are labeled “Career and College Ready” and teachers are labeled “effective.”

You can subscribe to this blog by entering your email at the bottom of the home page, follow my Educated Policy page on Facebook, or find me on X (Twitter) @educatedpolicy. And I guess I’ll dust off my Instagram @kim.mackeync.

Catch up on previous posts in this series:

Bursting the Bubble Sheet Part 2: Seeing the forest before the trees – educatEDpolicy
