Research Tools & Tips
Here are some tools and tips you can use for sampling, collecting information, improving response rates, analysing data and more.
The Community Toolbox provides over 6,000 pages of practical skill-building information on over 250 different topics. It was developed by the Work Group on Health Promotion and Community Development at the University of Kansas in Lawrence, Kansas - and is an incredibly useful collection of practical tools and tips for anyone involved in community change and development. One chapter of the Toolbox is devoted to Assessing Community Needs and Resources, and Chapters 36, 37, 38 and 39 cover aspects of Evaluating Programmes and Initiatives.
The Evaluation Centre, hosted by Western Michigan University, includes an excellent collection of Evaluation Checklists (more than 30 at last count) covering evaluation models, management, values and criteria, and much more.
The Online Evaluation Resource Library (OERL) was developed for professionals seeking to design, conduct, document, or review project evaluations, and is funded by the National Science Foundation in the USA. It includes professional development modules on choosing a methodology, sampling, designing and implementing questionnaires, and designing and implementing interviews. It also includes components, quality criteria, glossaries and sample evaluation plans, instruments and reports. While it has an education focus, its many resources can usefully be applied more generally.
The Multimedia in Manufacturing Education Lab at Georgia Institute of Technology has developed some useful on-line evaluation tools: an Evaluation Matrix, Anecdotal Record Form, Expert Review Checklist, Focus Group Protocol, Formative Review Log, Implementation Log, Interview Protocol, Questionnaire, User Interface Rating Form and a Sample Evaluation Report. Although designed for evaluating multimedia education, many of the tools can be readily applied to other situations.
A 'sample' is a part of the whole or total - one person out of a group of people, a piece of music out of a whole song, etc. In statistics, sampling is considered either a 'random' or a 'nonprobability' selection. A nonprobability sample means that there is the possibility of bias in the selection, and thus it may not be representative of the whole (called the 'population'). This usually means we cannot generalise from the sample studied.
Generally you need to have a 'list' or equivalent of the total population in order to generate a random sample. This is called the 'sample frame.' However, an exception is an approach called Hypernetwork Sampling, which has been used effectively to generate random samples of organisations - including nonprofits, congregations and workplaces (see Section 2. "Sample Design") when no complete list is readily available.
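To make the idea concrete, here is a minimal sketch in Python of drawing a simple random sample from a sample frame. The household list and sample size are invented for the example:

```python
import random

def draw_random_sample(sample_frame, n, seed=None):
    """Draw a simple random sample of n units from a complete
    sample frame (a list of every member of the population)."""
    rng = random.Random(seed)  # seed only to make the example repeatable
    return rng.sample(sample_frame, n)

# Hypothetical sample frame: every household on a mailing list
frame = [f"household_{i}" for i in range(1, 501)]
sample = draw_random_sample(frame, 50, seed=42)
```

Because every unit in the frame has an equal chance of selection, the sample supports generalisation to the population in a way a nonprobability sample does not.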
Gene Shackman has collected a number of useful further guides to sampling (including the place for 'snowball' sampling, sampling public records, sampling communities and households as well as individuals and much more).
Sometimes we make the mistake of thinking that research = doing a survey. While a survey is one method of collecting information, there are many others - which may be more appropriate for what you need to find out.
The University of Wisconsin Extension Programme Development and Evaluation has one of its "Evaluation Quick Tips" (No.8) on different Methods of Collecting Information and (No.11) Sources of Evaluation Information.
They also have specific "Evaluation Quick Tips" on a number of other topics.
In the specific field of surveys, Ronald Polland has developed for the Adolescent Pregnancy Prevention Grant, Duval County Health Department, The Essentials of Survey Research and Analysis: A Workbook for Community Researchers. It is a comprehensive and practical guide in 13 Lessons, covering topics from why use surveys, through constructing the questionnaire items, and coding data, to reporting the results.
One survey software company has produced a very readable online tutorial on questionnaires and survey design. Another company offers templates of online surveys you can use (start with the "Community - Government Surveys" category when searching their library).
On focus groups, the University of Wisconsin Extension Programme Development and Evaluation has a virtual workshop on running a focus group - including pre-reading, a workbook and a PowerPoint (R) presentation.
They have also prepared some useful Evaluation Instruments for working with community groups.
There is more information on different information collection methods in some of the resources listed above, and under Evaluation and Research Methods.
Your 'response rate' is the number of people who answered your questions divided by the number of people you contacted. The University of Wisconsin Extension Programme Development and Evaluation has one of its "Evaluation Quick Tips" (No.1) on How to get a Respectable Response Rate and another (No.2) on What You Should Do If You Haven't Gotten a Respectable Response Rate.
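The calculation itself is straightforward; as a quick illustration (the figures below are invented):

```python
def response_rate(responses, contacted):
    """Response rate = completed responses / people contacted."""
    if contacted == 0:
        raise ValueError("No one was contacted")
    return responses / contacted

# e.g. 132 completed questionnaires out of 400 people mailed
rate = response_rate(132, 400)  # 0.33, i.e. a 33% response rate
```

What counts as "respectable" varies by method and audience, which is exactly what the Quick Tips above discuss.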
Cleaning your Data
This includes checking for incomplete and implausible data, removing duplicates and identifying missing data. The University of Wisconsin Extension Programme Development and Evaluation has one of its "Evaluation Quick Tips" (No.22) on Making Certain Your Electronic Data are Accurate.
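As an illustration, here is a minimal cleaning pass in Python. The field names, the plausible age range and the flagging rules are assumptions for the example, not a standard procedure:

```python
def clean_records(records, required_fields, plausible_age=(0, 120)):
    """A minimal cleaning pass: drop exact duplicates, and flag
    records with missing required fields or an implausible age."""
    seen, cleaned, flagged = set(), [], []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:              # exact duplicate entry
            continue
        seen.add(key)
        missing = [f for f in required_fields if not rec.get(f)]
        age = rec.get("age")
        implausible = (age is not None
                       and not plausible_age[0] <= age <= plausible_age[1])
        if missing or implausible:
            flagged.append(rec)      # set aside for manual checking
        else:
            cleaned.append(rec)
    return cleaned, flagged

records = [
    {"id": 1, "age": 34, "answer": "yes"},
    {"id": 1, "age": 34, "answer": "yes"},   # duplicate
    {"id": 2, "age": 230, "answer": "no"},   # implausible age
    {"id": 3, "age": 51, "answer": ""},      # missing answer
]
cleaned, flagged = clean_records(records, ["id", "age", "answer"])
```

Flagged records are usually checked against the original questionnaires rather than simply deleted.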
Coding your Responses
'Coding' generally refers to how you classify and as a result summarise the responses you get to your questions. It applies to written and verbal responses and also to observations.
Closed questions (where people have to choose between set answers or multiple choices) effectively have your coding system already built in.
The University of Wisconsin Extension Programme Development and Evaluation has one of its "Evaluation Quick Tips" (No.20) on Ten Steps to Make Sense of Answers to Open-ended Questions.
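For open-ended questions, a simple keyword-based codebook can be a starting point. The sketch below is illustrative only - the question, categories and keywords are invented, and real coding normally needs human judgement as well:

```python
# Hypothetical codebook for the open-ended question:
# "What would improve this service?"
CODEBOOK = {
    "staffing": ["staff", "volunteer", "people"],
    "opening_hours": ["hours", "open", "evening", "weekend"],
    "facilities": ["building", "room", "parking", "toilet"],
}

def code_response(text):
    """Assign every category whose keywords appear in the response;
    fall back to 'other' if nothing matches."""
    text = text.lower()
    codes = [code for code, keywords in CODEBOOK.items()
             if any(word in text for word in keywords)]
    return codes or ["other"]

code_response("Longer opening hours at the weekend")  # ["opening_hours"]
```

Once responses are coded, they can be tallied and summarised just like answers to closed questions.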
Validity and Reliability
Validity refers to whether something is well-grounded or logically correct. The most basic test is face validity - which refers to whether, for example, questions look as if they should measure what they purport to measure. A measure can be invalid in a number of ways; a systematic or persistent tendency to make errors in the same direction is considered a bias. The best way to improve validity is to attempt to measure the same thing from different perspectives or in different ways.
Reliability, on the other hand, refers to the 'stability of results' - that is, if we do the survey again, or if someone else does it, will it get the same results (that is, are the results replicable)? Thus a measure may be reliable (we get the same results if repeated) but not valid (it does not actually measure what we set out to measure).
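One crude way to check reliability is simple percent agreement between two administrations of the same questions. The sketch below is illustrative only (more formal statistics exist for this, such as Cohen's kappa, which also corrects for chance agreement):

```python
def percent_agreement(first, second):
    """Crude test-retest reliability: the share of answers that
    match when the same questions are asked on two occasions."""
    if len(first) != len(second):
        raise ValueError("Result sets must be the same length")
    matches = sum(a == b for a, b in zip(first, second))
    return matches / len(first)

# Hypothetical answers from the same five respondents, asked twice
run1 = ["yes", "no", "yes", "yes", "no"]
run2 = ["yes", "no", "no", "yes", "no"]
percent_agreement(run1, run2)  # 0.8, i.e. 80% agreement
```

A high agreement figure says the measure is stable; it says nothing about whether it is valid.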
Donald Ratcliff has written a useful and easy to read article on improving Validity and Reliability in Qualitative Research.
Alex Yu offers a more in-depth and technical discussion on Reliability and Validity, especially in the context of assessment.
A simple step-by-step guide to Using Excel for Analysing Survey Questionnaires has been developed by the University of Wisconsin Extension Programme Development and Evaluation to help you enter data and run simple analysis with Microsoft Excel (R).
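If you prefer to script that kind of analysis, a basic frequency table - much like one you would build with a pivot table in Excel - takes only a few lines of Python. The answers below are invented for the example:

```python
from collections import Counter

def frequency_table(answers):
    """Tally answers to a closed question, returning each answer
    with its count and percentage, most frequent first."""
    counts = Counter(answers)
    total = len(answers)
    return {ans: (n, round(100 * n / total, 1))
            for ans, n in counts.most_common()}

# Hypothetical answers to a single closed question
answers = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
table = frequency_table(answers)
# {'agree': (3, 50.0), 'neutral': (2, 33.3), 'disagree': (1, 16.7)}
```

The same counts-and-percentages layout is the usual first step in reporting survey results, whatever tool you use.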
For more extensive statistical analysis, you can consider specialist statistical software packages such as SPSS or SAS.