Saying Goodbye to the Coverfly Score


When we launched the Coverfly Writer Portal almost 4 years ago, we knew we wanted to create a top rated chart for the most-awarded projects on our platform to help elevate writers. We built a dynamically-updated live ranking of projects, The Red List, based on all of the placements, scores, and information we had on a project. We called it the “Coverfly Score.”

The dynamic top rated chart has been a resounding success. It’s given writers something to achieve and celebrate, and, more importantly, built heat around writers that’s given them career traction. Today a new “writer success” story emerges nearly every day: a writer discovered by an industry exec or rep through a Coverfly list or a Coverfly initiative.

But the underlying score these rankings are based on, known as the “Coverfly Score,” hasn’t had as smooth a ride. Some of its flaws were gradually exposed as it struggled to adapt to the growing needs of our industry and writer audience.

I can summarize the problems with the Coverfly Score as:

  1. It doesn’t reflect both a project’s quality and relevancy (or “heat”) simultaneously.
  2. It fails to react to new information or data points the way one would expect it to.
  3. It doesn’t convey enough information.

Problem #1: Reflecting Quality and Relevancy

In terms of quality, the Coverfly Score has proven difficult to “game” - that is, projects with very high scores are almost certainly of high quality. There aren’t false positives, which is great! But the score does a poor job of highlighting projects that don’t yet have a lot of data. It misses out on quality projects that are new, or that only have a few recent placements.

And in terms of relevancy, or “heat,” it totally misses the boat. The Coverfly Score, by design, doesn’t go down. That means projects retain their high scores as time passes and remain at the top of The Red List years into their existence. As the industry has increasingly adopted our platform for talent discovery, we have hundreds of execs scouring our lists every day, and many of them are looking for new, hot projects; they’re not always as interested in ones that have been collecting accolades for a few years.

Problem #2: Reacting to new information the way one would expect it to

Since day one, we’ve struggled to explain the Coverfly Score to writers simply and in alignment with their expectations. That’s because the Coverfly Score calculation is quite complicated - it factors scores, placements, competition ratings, historical reader bias, the number of scorecards for a given project, and more into every calculation. We believe all of this information is relevant to finding scripts the industry will be excited about, but it’s difficult to boil down into a simple algorithm.

A complex or opaque algorithm wouldn’t be an issue in and of itself if its result was predictable, or at least within the realm of what we’d expect, but the amount of time our customer support team spends answering the question “Why didn’t my score go up?” is evidence that something isn’t right. 

Why is it so unpredictable? If your script becomes a Semifinalist in Nicholl, your score should go up, right? Well, not if the previous 9 scorecards we have on your project were great and raised its score already. But try explaining that to a disappointed writer who wants to celebrate every success, and rightfully so. Since Coverfly Scores can’t go down, we designed the algorithm to be very careful about letting them go up. That means your score might jump on a placement you think is “low-value,” but stay the same on a placement that you think is “high-value.”
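The behavior described above falls out of any monotone update rule: because the score can never decrease, each new data point can only raise it, and only after being discounted. Here is a toy sketch of such a rule - the function name, the confidence discount, and all numbers are invented for illustration, not Coverfly's actual algorithm:

```python
def update_score(current, new_estimate, confidence):
    """Monotone update: the score never decreases, so new evidence is
    discounted by how much we trust it before it can raise the score."""
    conservative = new_estimate * confidence
    return max(current, conservative)

# A "high-value" placement may not move an already-high score...
high = update_score(640, 700, confidence=0.8)  # 700 * 0.8 = 560, below 640
# ...while the very same placement lifts a lower-scored project.
low = update_score(400, 700, confidence=0.8)   # 560, above 400
```

Under a scheme like this, a Semifinalist placement that merely confirms what nine strong scorecards already implied leaves the score unchanged - exactly the behavior writers find counterintuitive.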

Problem #3: Conveying Information

If you tell a producer that your project has a 640 Coverfly Score, it conveys almost no information about the quality of your script. Even if the producer is familiar with Coverfly Scores, they don’t know how hard it is to earn a 640, or how many projects are above a 640. What if 90% of the projects on Coverfly have a Coverfly Score over 640? That 640 isn’t so great anymore. 

By the way, in reality, a 640 Coverfly Score is insanely high, and represents the top 0.001% of our database. See? Presenting it that way conveys much more information. Oftentimes sharing the percentile of a score, rather than the score itself, is a much better indicator of its value. “This project is in the top 10% of 40,000 projects on Coverfly” is much more powerful and understandable than “This project has a Coverfly Score of 520.”

You’ll notice a lot of other popular scoring systems around you are relational in this sense. Rotten Tomatoes doesn’t use a raw number as its score - it uses the percentage of critics who rated a title positively. IMDb’s STARmeter is based on percentiles. That way, if internet traffic quadruples across the board next year, that kid in your improv class doesn’t have a STARmeter higher than Anne Hathaway’s in 2012. Most standardized tests convey the test-taker’s performance as a percentile-based score. To receive a 700 on the GMAT (a test for admission to business school), you need to perform better on the test than roughly 90% of the other test-takers. No one cares about the percent of questions you answered correctly, though. Relational data is much more informative.
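The percentile framing is cheap to compute: rank a project's score against every other project's score. A minimal sketch (the score list is invented for illustration):

```python
def percentile_of(score, all_scores):
    """Percent of projects whose score falls strictly below `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

scores = [200, 300, 350, 400, 450, 480, 500, 510, 520, 640]
percentile_of(640, scores)  # 9 of 10 scores are lower -> 90.0
```

The same raw number can land at a very different percentile depending on the rest of the population, which is exactly why the raw number on its own conveys so little.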

The most visited FAQ on our support page is “What’s a good Coverfly Score?” My answer has always been “a score higher than the next person's.” The true value of a Coverfly Score is that it places you higher on a list than someone else, and that leads to additional exposure on Coverfly.

The Solution

At the end of April 2021, we’re retiring the Coverfly Score in favor of a percentile-based ranking system with a new underlying metric. This underlying metric will have the following characteristics:

  1. The rank will always go up when a project receives a new placement.
  2. More recent placements will receive a value bonus that will diminish over time, but will still retain some value, even years later.
  3. The value of a placement will take into account the number of submissions selected for that placement by the program, as well as the quality of the submissions that program receives.
  4. After a certain number of top placements for a project, the value of additional top placements for that project will start to count less.
  5. Writers will have access to their percentile of this metric, but the metric itself will not be shared with anyone (including writers).
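The five properties above could be satisfied by a metric along these lines. Every function name and constant here is invented for illustration - this is not Coverfly's actual formula, just one sketch of a metric that always rises on a new placement, decays recency bonuses toward a floor, and applies diminishing returns to repeated top placements:

```python
import math
from datetime import date

def placement_value(base_value, placed_on, today, floor=0.3, half_life_days=365):
    """A placement's value: its recency bonus halves each year but never
    drops below `floor` of the base value (property 2)."""
    age_days = (today - placed_on).days
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    return base_value * (floor + (1 - floor) * decay)

def project_metric(placements, today):
    """Sum of placement values, each successive top placement weighted at
    80% of the one before it (property 4). Every term is positive, so a
    new placement always raises the metric (property 1)."""
    values = sorted(
        (placement_value(v, d, today) for v, d in placements), reverse=True
    )
    return sum(v * 0.8 ** i for i, v in enumerate(values))
```

In this sketch, property 3 would determine each placement's `base_value` (how selective and well-regarded the program is), and property 5 would be enforced at the presentation layer: only the percentile of `project_metric` is ever shown.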

This solves a few problems:

  1. By reducing the “relevancy” value of a placement over time, projects with more recent placements will have an advantage and rise to the top of our top rated charts. This will surface more writers and give our industry members exposure to more timely projects and writers, which ultimately translates into more opportunities for more writers.
  2. By accounting for quality and recency in a placement, projects with only a few top placements per year will rank high on the charts.
  3. By recording your highest rank on the charts with our new badges, you'll be able to more easily switch focus to a newer project without feeling like you're abandoning a score or a hard-won rank on the top rated list.
  4. Because the underlying metric will move more predictably (and rise when a new placement is added), so will the project’s percentile ranking. We want movements in a project’s rankings to make sense.
  5. By focusing on the percentile/ranking instead of an arbitrary number, we’re able to better convey information about a project’s relevancy, which in turn helps industry members looking for great scripts.

In addition, we’ll be introducing the concept of Badges to writers’ profiles. Inevitably, there will be projects that rise to the top of the charts but, over time, fall off of the top rated chart as the placements that got them there age and diminish in value. The writers of those projects should have something to show for their months or years of hard work, which is why they’ll receive a “Top 5” badge on their project page, for example, along with the date the badge was achieved.

Good for Writers; Good for the Industry

Within the next couple of years, we expect the majority of new paid, working writers in Hollywood to have been discovered through Coverfly or a Coverfly-qualifying program. Being a professional writer shouldn’t be dependent on who you know or whether or not you can afford to make the move to LA to start looking for work. We believe that, as much as possible, your chances of breaking in should be based on how good your writing is. Our goals are lofty, and in order to hit them we have to adapt quickly and attempt radical strategies. The new ranking system will help us better highlight projects and writers for our growing industry base, which is hungry for fresh perspectives, and we know they’ll find them on Coverfly. We can’t wait to see the incredible writers who enter the industry through our pipeline in the coming years.

April 7th, 2021 // By Scot Lawrie, Co-founder