
Program Evaluation and Measurement Tools Paper


Program Evaluation and Measurement Tools Paper: Guidelines and Instructions

Readings: See readings for Unit 3

Assignment Description

The goal of this paper is to identify an evaluation plan for your project. You need to identify which health outcomes you will measure. How will you evaluate the effectiveness of your intervention plan? Identify any measurement tools or evaluation strategies that may be useful in determining efficacy, efficiency, and quality. Your project will most likely involve an effectiveness evaluation. You may want to create an evaluation plan that would facilitate tracking and trending. It is important to have clear goals for how you plan to evaluate the effectiveness of your proposed intervention. It is equally important to measure parameters with tools that have established reliability and validity.

All submissions should be submitted to the Assignment section in PDF format. Your submission will automatically go to Turnitin.com. You will see your score in the assignment section. A copy of the grading rubric is provided for you below. It is highly recommended that you organize your paper based on the instructions and grading rubric criteria; the use of headings based on the assignment instructions is also strongly recommended.

When you write your paper, follow the grading guidelines. Each student is to submit a paper (no group work). The paper should be carefully written in a formal style, adhere to the most recent APA guidelines, use primary sources, provide an integration of ideas, and be 4-6 pages in length, not including the USA-approved title page, appendix, and reference list.

Assignment Instructions & Criteria. See the grading rubric in the Assignments section for point values.


1. Introduction paragraph

  • There must be a thesis statement that tells the reader the purpose of the paper and what will be discussed.

 

2. How will you evaluate the effectiveness of your intervention plan?

  • Identify and report the health outcomes you will measure.
  • Identify any measurement tools or evaluation strategies that may be useful in determining efficacy, efficiency, and quality.

 

3. Identify short-term, intermediate, and long-term outcomes from your intervention plan.

  • List and describe how you plan to measure short-term, intermediate, and long-term outcomes.
  • Describe how you will identify and collect data for formative and summative evaluations.

 

4. Describe any potential tools that you may use to collect your data. Identify their reliability and validity scores. You can attach a copy of your selected tools as an appendix in the paper. (See the illustrative reliability sketch after this list.)

  • Identify tools (e.g., questionnaires) that will help you obtain the information you need.
  • What tools will you use to measure whether your health promotion-prevention intervention had an effect?

 

6. Conclusion: summarize the essential points of the paper (no more than one paragraph).

 

7. Appendix

  • Include a copy of the tool or tools you plan to use.
  • List any physiologic measures you plan to collect.
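The criteria above ask for tools with established reliability and validity. As a purely illustrative sketch (not an assignment requirement), the Python snippet below estimates internal-consistency reliability with Cronbach's alpha for a hypothetical Likert-style questionnaire; the responses and item count are invented for demonstration.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                             # number of items
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 participants x 4 Likert items (1-5 scale)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

An alpha of roughly 0.70 or higher is commonly treated as acceptable, although the appropriate benchmark for your chosen instrument should come from its published validation literature.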

The Goals of Evaluation


From The Goals of Evaluation at http://www.socialresearchmethods.net/kb/intreval.php

The generic goal of most evaluations is to provide “useful feedback” to a variety of audiences including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as “useful” if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one — studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated — they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object — they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.

Formative evaluation includes several evaluation types:

  • needs assessment determines who needs the program, how great the need is, and what might work to meet the need
  • evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
  • structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
  • implementation evaluation monitors the fidelity of the program or technology delivery
  • process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Summative evaluation can also be subdivided:

  • outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
  • impact evaluation is broader and assesses the overall or net effects — intended or unintended — of the program or technology as a whole
  • cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values (see the sketch after this list)
  • secondary analysis reexamines existing data to address new questions or use methods not previously employed
  • meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question
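As a minimal, hypothetical illustration of the cost-effectiveness point above, the sketch below computes an incremental cost-effectiveness ratio (ICER); all dollar figures and outcome counts are assumed values, not data from any actual program.

```python
# Hypothetical comparison of an intervention against usual care.
intervention_cost = 52_000.0     # total program cost in dollars (assumed)
usual_care_cost = 40_000.0       # cost of the comparison condition (assumed)
intervention_outcome = 130.0     # e.g., participants reaching the target outcome (assumed)
usual_care_outcome = 100.0

# Incremental cost-effectiveness ratio: extra dollars per extra unit of outcome.
icer = (intervention_cost - usual_care_cost) / (intervention_outcome - usual_care_outcome)
print(f"ICER = ${icer:,.2f} per additional unit of outcome")  # $400.00 with these assumed numbers
```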

Evaluation Questions and Methods

Evaluators ask many different kinds of questions and use a variety of methods to address them. These are considered within the framework of formative and summative evaluation as presented above.

In formative research the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what’s the question?

Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it?

The most common method used here is “needs assessment,” which can include analysis of existing data sources and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.
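As one hedged illustration of the sample-survey component of a needs assessment, the snippet below applies the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/e²; the assumed prevalence and margin of error are placeholders you would replace with your own planning values.

```python
import math

def survey_sample_size(p=0.5, margin_of_error=0.05, z=1.96):
    """Sample size needed to estimate a proportion p within +/- margin_of_error
    at ~95% confidence (z = 1.96), assuming simple random sampling."""
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# Assumed 50% prevalence (the most conservative choice) and a 5% margin of error.
print(survey_sample_size())                        # 385
print(survey_sample_size(margin_of_error=0.10))    # 97
```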

How should the program or technology be delivered to address the problem?

Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.

How well is the program or technology delivered?

Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

The questions and methods addressed under summative evaluation include:

What type of evaluation is feasible?

Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.

What was the effectiveness of the program or technology?

One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.
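As a minimal sketch of the kind of quasi-experimental comparison described here (not a prescribed analysis for the assignment), the code below compares change scores in a simulated intervention group against a simulated comparison group using an independent-samples t-test; all data are fabricated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post scores on a health outcome (higher = better).
intervention_pre = rng.normal(60, 8, size=40)
intervention_post = intervention_pre + rng.normal(6, 5, size=40)   # assumed improvement
control_pre = rng.normal(60, 8, size=40)
control_post = control_pre + rng.normal(1, 5, size=40)             # little change assumed

# Compare change scores between groups.
intervention_change = intervention_post - intervention_pre
control_change = control_post - control_pre
t_stat, p_value = stats.ttest_ind(intervention_change, control_change)

print(f"Mean change (intervention) = {intervention_change.mean():.1f}")
print(f"Mean change (control)      = {control_change.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```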

What is the net impact of the program?

Econometric methods for assessing cost effectiveness and cost/benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.

Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many not mentioned, is supported by an extensive methodological research literature. 
