EXPLAIN THE SCOPE AND USES OF MONITORING AND EVALUATION: DATA QUALITY
Data quality describes how well data serves its purpose. There are many definitions, but data is generally considered high quality if it is "fit for [its] intended uses in operations, decision making and planning" (J. M. Juran). Alternatively, data is deemed high quality if it correctly represents the real-world construct to which it refers. Beyond these definitions, as data volume increases, internal consistency within the data becomes significant regardless of fitness for any particular external purpose. Views on data quality can differ even among people discussing the same set of data used for the same purpose.
There are a number of theoretical frameworks for understanding data quality. A systems-theoretical approach influenced by American pragmatism expands the definition of data quality to include information quality, and grounds the fundamental dimensions of accuracy and precision in the theory of science (Ivanov, 1972). One framework, dubbed "Zero Defect Data" (Hansen, 1991), adapts the principles of statistical process control to data quality. Another seeks to integrate the product perspective (conformance to specifications) and the service perspective (meeting consumers' expectations) (Kahn et al., 2002). Yet another draws on semiotics to evaluate the quality of the form, meaning and use of the data (Price and Shanks, 2004). A highly theoretical approach analyzes the ontological nature of information systems to define data quality rigorously (Wand and Wang, 1996).
A considerable amount of data quality research involves investigating and describing various categories of desirable attributes (or dimensions) of data. These lists commonly include accuracy, correctness, currency, completeness and relevance. Nearly 200 such terms have been identified, and there is little agreement about their nature (are they concepts, goals or criteria?), their definitions or their measures (Wang et al., 1993). Software engineers may recognize this as similar to the problem of "ilities" (non-functional requirements).
Monitoring is the systematic and routine collection of information from projects and programmes for four main purposes:
• To learn from experiences to improve practices and activities in the future;
• To have internal and external accountability of the resources used and the results obtained;
• To take informed decisions on the future of the initiative;
• To promote empowerment of beneficiaries of the initiative.
Monitoring is a periodically recurring task that begins in the planning stage of a project or programme. It allows results, processes and experiences to be documented and used as a basis for steering decision-making and learning. In essence, monitoring is checking progress against plans; the data acquired through monitoring is then used for evaluation.
Evaluation is assessing, as systematically and objectively as possible, a completed project or programme (or a phase of an ongoing project or programme that has been completed). Evaluations appraise data and information that inform strategic decisions, thus improving the project or programme in the future.
Evaluations should help to draw conclusions about five main aspects of the intervention:
• Relevance
• Effectiveness
• Efficiency
• Impact
• Sustainability
Information gathered in relation to these aspects during the monitoring process provides the basis for the evaluative analysis.
Monitoring & Evaluation
M&E is an embedded, constitutive part of every project or programme design (a "must be"), not a control instrument imposed by the donor or an optional accessory (a "nice to have"). Ideally, M&E is understood as a dialogue on development and its progress between all stakeholders.
In general, monitoring is integral to evaluation. During an evaluation, information from previous monitoring processes is used to understand the ways in which the project or programme developed and stimulated change. Monitoring focuses on the measurement of the following aspects of an intervention:
• The quantity and quality of the implemented activities (outputs: What do we do? How do we manage our activities?)
• Processes inherent to the project or programme (outcomes: What effects or changes occurred as a result of the intervention?)
• Processes external to the intervention (impact: Which broader, long-term effects were triggered by the implemented activities in combination with other environmental factors?)
The evaluation process is an analysis and interpretation of the collected data that delves deeper into the relationships among the results of the project or programme, the effects it produced and its overall impact.
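The distinction between planned targets and monitored results can be made concrete with a small sketch. Assuming a hypothetical M&E framework in which each indicator has a planned target and an actual value collected during monitoring (the indicator names and figures below are invented for illustration), progress against plan can be computed as:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str      # what is measured
    level: str     # "output", "outcome" or "impact"
    target: float  # planned value from the M&E framework
    actual: float  # value collected during monitoring

def progress(ind: Indicator) -> float:
    """Return achievement of the indicator as a percentage of its target."""
    return round(100 * ind.actual / ind.target, 1)

# Hypothetical indicators for a water-supply intervention
indicators = [
    Indicator("Wells constructed", "output", target=40, actual=34),
    Indicator("Households with safe water access", "outcome", target=1200, actual=900),
]

for ind in indicators:
    print(f"{ind.level:<8} {ind.name}: {progress(ind)}% of target")
# prints:
# output   Wells constructed: 85.0% of target
# outcome  Households with safe water access: 75.0% of target
```

A real M&E system would add baselines, reporting periods and disaggregation, but the core of "checking progress against plans" is exactly this comparison of actuals to targets, repeated over time.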
Table 19. Selecting the right mix of monitoring mechanisms

Data and Analysis:
• M&E framework
• Progress and quarterly reports on achievement of outputs
• Annual Project Report
• Project delivery reports and combined delivery reports
• Substantive or technical documents: MDG Reports, National Human Development Reports, Human Development Reports
• Progress towards achieving outcomes and Standard Progress Reports on outcomes

Validation:
• Field visits
• Reviews and assessments by other partners
• Client surveys
• Reviews and studies

Participation:
• Sectoral and outcome groups and mechanisms
• Steering committees and mechanisms
• Stakeholder meetings
• Focus group meetings
• Annual review
Importance of Monitoring and Evaluation
Although evaluations are often retrospective, their purpose is essentially forward-looking. Evaluation applies lessons and recommendations to decisions about current and future programmes. Evaluations can also be used to promote new projects, win support from governments, raise funds from public or private institutions and inform the general public about the different activities.
The Paris Declaration on Aid Effectiveness (February 2005) and the follow-up meeting in Accra underlined the importance of the evaluation process and of ownership of its conduct by the countries hosting projects. Many developing countries now have M&E systems, and the trend is growing.
It would be difficult to find someone who thinks that data quality isn’t important. Certainly, the effects of poor data quality are painfully clear: organizations depend on data to make strategic management decisions, provide customer service, and develop processes and timelines. If that data is obsolete, inconsistent, incoherent, or just plain wrong, it can cost a company time, customers, and revenue. Additionally, demonstrating data quality is often a requirement for regulatory compliance.
Trying to develop an overarching program to maintain and improve data quality can feel like chasing ghosts. In this article, we present concepts essential to a successful data quality program.
Broadly, quality data is "fit for use": it can be trusted and it is suitable for its intended purpose. Assessing whether a specific set of data meets these criteria requires answering several questions: What data is being used? Who is using it? How, when and why are they using it? This becomes more complex as organizations begin sharing data across lines of business, departments and other entities. It quickly becomes clear that to measure data quality effectively, it must be defined at the entity or even the attribute level.
Data quality can be measured along many dimensions, including accuracy, reliability, timeliness, relevance, completeness and consistency. Of course, different organizations will have different priorities. However, it is important to recognize that there are technical and business views of data quality, and both matter. Data that meets technical quality standards (such as consistency, correct formatting and well-defined fields) but that users do not perceive as reliable, accurate or useful will have little impact on the organization. In short, ensuring data quality requires an awareness of both technical and business requirements.
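Some of these dimensions can be scored directly once they are defined at the attribute level. The sketch below is purely illustrative: the records, field names and regular-expression patterns are hypothetical stand-ins, not a standard; real programs would define validity rules per attribute with business stakeholders.

```python
import re

# Hypothetical customer records; None marks a missing value.
records = [
    {"id": 1, "email": "ana@example.com", "country": "KE", "signup": "2023-04-01"},
    {"id": 2, "email": None,              "country": "KE", "signup": "2023-04-02"},
    {"id": 3, "email": "bob@example",     "country": "Kenya", "signup": "02/04/2023"},
]

EMAIL_RE = r"[^@]+@[^@]+\.[^@]+"  # naive email shape, for illustration only
DATE_RE = r"\d{4}-\d{2}-\d{2}"    # ISO-style dates

def completeness(rows, field):
    """Share of records where the field is present (non-null)."""
    return sum(r[field] is not None for r in rows) / len(rows)

def validity(rows, field, pattern):
    """Share of non-null values matching the expected format."""
    values = [r[field] for r in rows if r[field] is not None]
    return sum(bool(re.fullmatch(pattern, v)) for v in values) / len(values)

print(f"email completeness: {completeness(records, 'email'):.0%}")   # 2 of 3 present
print(f"email validity:     {validity(records, 'email', EMAIL_RE):.0%}")
print(f"date validity:      {validity(records, 'signup', DATE_RE):.0%}")
```

Note that these scores capture only the technical view; whether record 3's "Kenya" versus "KE" is a consistency defect, and how much each dimension matters, are business decisions.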