Increased participation and a refined
evaluation process build stronger correlations
between good results and best practices.
Now in its second decade, the DiversityInc Top 50 competition continues to grow and evolve. We enjoyed a 19 percent increase in participants from last year, to 535 organizations, and we continued to reinvest in our project, spending hundreds of thousands of dollars to upgrade our evaluation process.
Our editorial policies remain
consistent: Actual results determine the ranking, not business
conducted with DiversityInc.
There are three companies on our list that do no
business with us at all.
The application process is free, and all applicants who fill in enough data will receive a free report card. Please see www. for a complete explanation of our methodology, eligibility requirements and how to apply.
We spent a full year improving the SPSS program that encodes our methodology and produces the list. A four-person internal team plus a seasoned SPSS programming consultant worked full time for most of last year on this project, the free report cards and our benchmarking product. My goal was to increase our accuracy in measurement between companies and industries, and to build stronger correlations between good results and best practices.
What makes our process successful is that the hundreds of
competitors give us a large-enough
database to make a relative assessment of the quality of diversity
management by actual outcome.
In other words, we don’t make the
standards; the field of competitors
determines them. This can be done
with statistical evaluation.
For example, we measure four levels of management. Inside each of those levels, we measure the percentage standard deviation of all results, and from that we determine what defines best results for 50 companies out of the full field of competitors. We roll up the results of hundreds of comparisons into a point score in each of the four areas we measure (CEO Commitment, Human Capital, Corporate and Supplier Diversity). We also test for consistency across all four areas.
We are evolving in two main directions. First, we want to continue to improve our accuracy. Second, we want to increase the number of competitors. For the 2012 list, we will simplify our survey where possible but continue to refine our measurement. There are areas that concern me, such as the lack of improvement for women in the top ranks, so we're looking to measure competitive versus relatively non-competitive positions and the diversity difference between the two. Our second area of focus will result in a smaller survey for the thousands of companies that are critical to their local economy, but not nationally or internationally. We're going to try to develop several regional lists, cross-tabbed by geography and industry.
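Cross-tabbing by geography and industry amounts to bucketing competitors into (region, industry) cells and ranking within each cell. A minimal sketch, with made-up company names and regions:

```python
from collections import defaultdict

def cross_tab(companies):
    """Bucket companies by (region, industry) so a separate
    regional list can be ranked inside each cell."""
    cells = defaultdict(list)
    for name, region, industry in companies:
        cells[(region, industry)].append(name)
    return dict(cells)

# Hypothetical regional competitors -- names are illustrative only.
companies = [
    ("Acme Health", "Midwest", "Healthcare"),
    ("Beta Bank", "Midwest", "Finance"),
    ("Coast Clinic", "Southeast", "Healthcare"),
    ("Delta Credit", "Southeast", "Finance"),
    ("Evergreen Health", "Midwest", "Healthcare"),
]
cells = cross_tab(companies)
```

Each cell then holds a comparable peer group, so a shorter regional survey can still produce a meaningful relative ranking within it.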