I. Why Evaluate?

The first time someone asks you for the outcomes of your entrepreneurship center, your initial reaction may be concern, fear or something in between. “Why do they want to know?” you might ask. “How am I going to get this done? I’m already too busy,” is another reasonable response. “What are their expectations? What does success look like?” may be other questions you have.

Regardless of your instincts, being able to tell your story means being able to document your results. If someone is asking the question, it probably means you haven’t yet figured out how to collect, analyze or communicate the impacts of the work you do in your entrepreneurship center. We recommend you consider this an opportunity to take care of an inherent program task: understanding and communicating the results of your work.

In the for-profit world, results are usually captured through an accounting system. Revenues, expenses and the bottom line show how the business is doing. Comparing these numbers to budget demonstrates a company’s progress against its plan. By projecting these numbers out into the future, a company can tell a story about its anticipated trajectory, potentially gaining investors or raising debt capital to help reach business goals.

In the nonprofit and government worlds, especially in a field like entrepreneurship with its public policy goals, an accounting system can track and measure the organization’s financial sustainability, but it rarely gives information about how the organization is doing relative to its mission or goals.

So, the process of evaluating your program as described in this book allows you to answer the important question: Are we achieving our mission?

Different stakeholders want different things

All nonprofits and governments have stakeholders who support the program financially or as volunteers. These can be partners, contractors or community members who have stakes in the program’s mission.

Common stakeholders are state, regional or local government entities, such as economic development organizations. Other stakeholders may be legislators or city councils that have program oversight or provide funding. Universities and community colleges are common stakeholders for all types of entrepreneurship centers. Sometimes industry groups like Chambers of Commerce or trade associations are stakeholders, as are partners that serve small businesses like Small Business Development Centers or Manufacturing Extension Partnership Centers (in the United States).

Evaluation can be a challenge if a program’s stakeholders have different desired outcomes. This can occur if you have a variety of funding sources, for instance, with differing objectives. Or, it can occur if your program becomes politicized and desired outcomes suddenly change.

An important benefit of the evaluation process itself is that it clearly illuminates the outcomes desired by various stakeholders and identifies alignment and consistency among them. The evaluation process may also uncover inconsistencies in stakeholder expectations, which will help program staff manage the situation better.

Differences between evaluation, audit, benchmark, and return on investment

When stakeholders ask questions about a program, they often use imprecise language. Sometimes that is because they don’t know the subtle implications of different words. It’s important to understand exactly what stakeholders are asking, in order to best answer their questions.

For example, if they ask for an evaluation of your program, this suggests not only collecting data on outcomes, but also tying those outcomes to your activities; in other words, demonstrating causality. Is your program doing what it was designed to do? However, the stakeholders may not understand that this language implies a more intensive study in order to prove causality.

More often, stakeholders ask questions tied to efficiency and effectiveness. Efficiency means how much a program accomplishes per unit of resource. A common version of this question is “How much does it cost this program to create a single job?” Effectiveness, on the other hand, is related to evaluation (is your program doing what it set out to do?) and implies the need to address causality.

A related question is “What is the return on investment?” Often stakeholders who ask this type of question are trying to compare various economic development programs to each other with the goal of eliminating the lowest performing ones. Stakeholders also define the return differently. In some states, it is defined as tax revenue returned to the state compared to the state’s appropriation. In other states, the quantitative results of an economic impact analysis are compared to the investment. Take care in these situations, as not all economic development programs have the same goals, and so comparing the efforts involved ignores broader issues, such as the importance of entrepreneurs in the ecosystem.
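As a rough illustration of the two calculations above, the cost-per-job efficiency figure and the tax-revenue-per-appropriation definition of return on investment, here is a minimal Python sketch. All numbers are hypothetical and stand in for your own program data:

```python
# Hypothetical figures for illustration only; substitute your program's actual data.
program_cost = 500_000          # annual state appropriation for the program
jobs_created = 120              # jobs attributed to the program this year
tax_revenue_returned = 650_000  # estimated state tax revenue tied to program outcomes

# Efficiency: cost to the program of creating a single job
cost_per_job = program_cost / jobs_created

# One common ROI definition: tax revenue returned per dollar appropriated
roi = tax_revenue_returned / program_cost

print(f"Cost per job: ${cost_per_job:,.2f}")
print(f"ROI: ${roi:.2f} returned per $1 invested")
```

Note that the ROI figure is only as credible as the attribution behind it: the tax-revenue estimate must be tied to outcomes your program can plausibly claim, which is exactly the causality question discussed above.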

Some stakeholders will ask for an audit. This is quite a different question, relating more to financial management, and whether expenses are justified. In rare circumstances stakeholders may want to audit outcomes, ensuring that clients have indeed achieved what a program claims they have achieved.

Benchmarking is another variant of the comparison question. How is your program doing compared to others like it? This is an extremely difficult question to answer in many cases simply because the data do not exist nationally on all entrepreneurship centers. So, you cannot simply go to U.S. Census Bureau tables and see what outcomes other programs have achieved. Nor is it simple to ensure that you are comparing your program to similar programs. You can, however, benchmark your state or region or city to other similar areas in terms of demographics, and potentially the size of your entrepreneurial community.

Regardless of the question stakeholders ask, it is important that you ask for clarification, learn specifically what the questioner wants to know, and find out how they intend to use the answer.

Why do metrics matter?

Often people say, “Why do metrics matter? Isn’t that like driving a car looking through the rearview mirror?” Good question.

There are many reasons to use metrics to collect and analyze data in the management of your program. Three important reasons are as follows:

  1. Metrics will help you manage your work. Having data about your program activities and outcomes can help you figure out which of your actions produce the outcomes you desire and which do not. The for-profit corollary is knowing which products are profitable and which are not. Metrics can also help you measure the effectiveness of your staff and your partners, providing an important feedback loop to help you deliver better quality products and services.
  2. Metrics help you communicate with your stakeholders. Being able to demonstrate positive results can help document the value of a program, and thereby the importance of stakeholders’ continued support. The for-profit corollary is showing an income statement to a bank or investors, in order to document the path to achieving their objectives: getting paid back or having a successful exit.
  3. Metrics help justify requests for support. Outcome data are essential to proposal writing for grants, as well as making the case to potential sponsors and new stakeholders.

No one can guarantee that being able to provide data-driven answers to questions about an entrepreneurship center’s outcomes will ensure its continued support, but it is more common for programs that lack performance data to be closed than it is for programs whose activities are well documented. The best example is the Ben Franklin Technology Partners program in Pennsylvania, which has produced regular reports1 about its outcomes for over thirty years and has retained significant financial support from the Commonwealth of Pennsylvania. It remains one of the most respected and successful entrepreneurial support programs in the country.

The Ben Franklin example illustrates another critical aspect of metrics. Programs should do evaluation on a regular basis, documenting results year after year, and showing a long-term track record of success. These types of longitudinal data sets are extremely powerful, since they allow organizations to show impacts of external events such as recessions and recoveries, rather than guessing whether a single year’s results are representative of all years. Those who view evaluation as a single event also miss out on the opportunity to use the data for internal program improvement, and to strengthen the support of existing stakeholders over time.

What will it cost? Is it worth it?

For entrepreneurship centers that watch every penny of their expenses, evaluation and metrics may seem like an unaffordable luxury. But given the downside of not having outcome data, that is a shortsighted approach.

The cost of evaluating an entrepreneurship center depends entirely on the program scope, necessary data and the evaluation approach. Luckily, with inexpensive survey tools available online, the cost of collecting data is dramatically lower now than it was even ten years ago; this used to be the most expensive part of the process.

If you design your program with evaluation in mind from the beginning, this work can be a seamless part of your everyday activities. When evaluation is an event, and you have to do the work all at once, it can be burdensome.

Is it worth it? What’s the value of having products and services that are consistently improved? What’s the value of having supportive stakeholders? What’s the value of being able to explain your program and its outcomes to potential supporters? When you look at it this way, evaluation and metrics are essential to successful program operation.


Greedy For More Metrics Magic?

InBIA’s Metrics that Matter Course is a more hands-on opportunity. This course is taught by “in-the-trenches” ESO managers just like you, people who have won federal funding awards where reporting strong, positive KPIs and organizational success stories became a daily job requirement under a national spotlight. In other words, we take the academic viewpoint out.

They understand the struggle of reporting fledgling or lower-than-expected revenue or investment secured by clients; of securing the next wave of operations funding from local government entities now run by a new group of administrators and politicians; and of fighting the battle of standing out in your local community when new ESOs crop up.

Furthermore, this course enables ESOs to address their individual and relevant concerns with ESOs similar in structure and scope, in dynamic, facilitated breakout session activities. Finally, attendees can expect some fun surprises in the main presentation of the course, making this lecture anything but boring (especially through the use of popular culture icons and hilarious real-life anecdotes!). Click here to enroll in this Specialty Course, Metrics That Matter.