Route Two Soccer: Projecting the NWSL Season


Projecting performance is difficult, even for highly qualified people. For those of us who don’t have decades of experience under our belts, it’s even harder. To illustrate that point, I want to talk today about a simple but powerful idea which has helped guide conversations in baseball, and then apply it to a soccer context.

The idea is this: what if you designed a projection system so simple that a monkey could use it? At the time this was first discussed, Friends was still on the air, so its creator, the sabermetrician Tom Tango, called his system a ‘Marcel Projection,’ after the show’s monkey.

The way it works is: you take the last three years of performance and combine them, weighting the most recent year most heavily, the middle year less heavily, and the most distant year the least. Then divide by the sum of the weights, and that’s your projection. Depending on what you’re projecting, you might want to add some small tweaks to normalize the data, but that’s really just about all there is to it.
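The weighted average at the heart of Marcel fits in a few lines. Here’s a minimal Python sketch; the 5/4/3 weights are the ratio used later in this piece, and the sample point totals are made up for illustration:

```python
def marcel(values, weights=(5, 4, 3)):
    """Weight the last three seasons of a stat, most recent first.

    The 5/4/3 ratio leans on the most recent year; dividing by the
    sum of the weights turns the weighted total back into a projection.
    """
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# A hypothetical team that earned 50, 44, and 38 points over the last
# three seasons projects to (5*50 + 4*44 + 3*38) / 12 = 45 points.
print(marcel((50, 44, 38)))
```

Swap in any counting stat (points, goals for, goals against) and the same function applies.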

The system is designed to project individual player performance, which is relatively easy in baseball, a sport awash in quantifiable statistics. But in soccer, where such stats are less available (and less relevant), I want to pull the camera out wider and look at an even more basic unit: the team.

To that end, here’s a Marcel projection for the 2019 NWSL standings:

Team                     Pts   GF   GA    GD
North Carolina Courage    48   45   21   +24
Portland Thorns           43   38   23   +15
Chicago Red Stars         37   33   27    +6
Seattle Reign             36   33   26    +7
Orlando Pride             31   33   33    -1
Utah Royals               31   23   25    -2
Houston Dash              27   30   37    -7
Washington Spirit         21   23   36   -13
Sky Blue                  21   29   46   -17

Producing this projection took about three minutes of work. I entered the results from 2016-2018* into an Excel sheet, weighted the seasons by a 5/4/3 ratio, and then generated a result.**
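In spreadsheet terms, that means applying the 5/4/3 weighting independently to each column: points, goals for, goals against. A sketch of that step, with made-up season lines standing in for any real team’s 2016-2018 figures:

```python
WEIGHTS = [5, 4, 3]  # 2018, 2017, 2016 -- most recent season weighted most

# Hypothetical (points, goals for, goals against) lines, most recent first.
seasons = [(50, 44, 20), (44, 38, 25), (38, 35, 30)]

def project(seasons, weights=WEIGHTS):
    """Weight each stat column separately and round to whole numbers."""
    columns = zip(*seasons)  # points column, GF column, GA column
    return [round(sum(w * v for w, v in zip(weights, col)) / sum(weights))
            for col in columns]

pts, gf, ga = project(seasons)
```

Because each column is rounded on its own, GF minus GA can drift a goal from the listed goal difference, which is part of what the second footnote is about.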

My ‘projection’ knows almost nothing about these teams. It doesn’t know who the coach is, which players had breakout seasons, what trades were made, who is coming back from injury. It doesn’t know style of play. It doesn’t know that it’s a World Cup year. All it knows is the bare results from the last three years.

And yet, I would wager that this projection ends up being pretty close to accurate. In fact, it will probably beat the projections from a lot of very intelligent people, who know far more about all those issues I just listed.

That’s because human beings are absolutely full of unquestioned biases, of all sorts. We overrate some players, while underrating others. We overstate the importance of some events while failing to properly include others. And there is the classic problem of punditry: it’s fun to predict change and boring to predict continuation of the status quo.

Now, I certainly don’t mean to suggest that there is no value in expert analysis. I only want to lay a marker for how to judge assessments. Because in the current women’s soccer ecosystem, there’s almost no accountability. Pundits are free to make predictions, but not only is there no one checking back to see what they got right and wrong, there isn’t even a structure for measuring success.

So things like a simple Marcel system are useful, if only because they generate baselines against which people can measure themselves. It may not be exciting to predict that everything will more-or-less remain the same. But it does have the virtue of generally being true. And that’s something that everyone involved in this business can use a reminder of now and again.

I’ll put together my own real projection once the season gets closer. When I do so, I’ll certainly think about all the little details of player movement and development. I’ll look at the schedule. I’ll consider how teams will deal with losing their national team talent for the World Cup. And I’ll try my level best to produce something that is accurate.

But, to be honest, there’s every chance that the dumb Marcel from this column will end up being more accurate than my clever prediction to come.

* For the purposes of this exercise, I’m treating Western New York/North Carolina and FCKC/Utah as continuous teams, despite the name and venue changes.

** Eagle-eyed readers will notice that the goals, goals allowed, and goal difference don’t quite add up. That’s partly due to rounding errors (e.g. Orlando are actually projected to score 32.5 goals and concede 33.25, which rounds to a GD of -1, even though both totals are listed as 33), but is mostly because the data includes two seasons of the Boston Breakers, who conceded rather a lot of goals. If you want, you could control for that by subtracting one goal from each team’s projected total for every 24 goals it is expected to score.
