Citizen Science and Community-Based Monitoring
1. Outcome
This Circular provides guidance on incorporating citizen science and community-based monitoring data into ocean accounts, with particular attention to quality assurance for non-official data sources and compilation procedures for integrating volunteer-collected data with official statistics. Readers will gain an understanding of the types of marine citizen science programs that can generate useful data for ocean accounting, the quality considerations that must be addressed when working with such data, approaches for integrating non-official data sources with official statistics, and concrete compilation procedures for processing citizen science observations into account-ready datasets.
The guidance recognizes that methods for incorporating citizen science data into official statistics are still developing, and standards in this area continue to evolve. Given this Emerging status, compilers should exercise appropriate caution when incorporating citizen science data into ocean accounts, clearly document the data sources and quality considerations, and be prepared to update approaches as methodological standards evolve.
Where relevant, this Circular identifies connections with traditional and indigenous knowledge systems, though the detailed treatment of these knowledge systems is provided in TG-3.6 Traditional Knowledge. Given the relevance of citizen science data to ecosystem condition monitoring, familiarity with ecosystem accounting concepts from TG-3.3 Ecosystem Accounts is recommended. The quality assurance framework applicable to all ocean accounting data, including citizen science contributions, is established in TG-0.7 Quality Assurance. Key terms used in this Circular are defined in TG-0.6 Glossary. Indicators derived from citizen science monitoring programs may support the indicator frameworks described in TG-2.1 Biophysical Indicators.
Citizen science data may contribute to multiple account types within the ocean accounting framework. TG-3.5 Ecosystem Condition addresses ecosystem condition accounts where citizen science reef monitoring, beach surveys, and biodiversity observations can supplement or validate official monitoring data. TG-6.12 Marine Litter specifically addresses beach litter surveys conducted by volunteer groups as a source for marine pollution accounts. The broader framework for social accounts, including community wellbeing and participatory assessment, is addressed in TG-3.5 Social Accounts.
2. Requirements
This Circular requires familiarity with:
- TG-0.1 General Introduction to Ocean Accounts -- provides foundational understanding of Ocean Accounts components and the relationship between environmental and economic accounting frameworks.
- TG-0.7 Quality Assurance -- establishes the general quality framework applicable to ocean accounting data, including the quality dimensions and metadata standards that must be applied when assessing fitness for purpose of citizen science contributions.
- TG-3.3 Ecosystem Accounts -- for the ecosystem condition and extent accounting frameworks to which citizen science data on marine biodiversity and habitat condition can contribute, including the biophysical indicators and tiered measurement approaches referenced in this Circular.
3. Guidance Material
Citizen science and community-based monitoring represent important potential sources of data for ocean accounts, particularly in contexts where official monitoring networks have limited spatial or temporal coverage. The Framework for the Development of Environment Statistics (FDES 2013) recognizes that environment statistics draw upon diverse data sources, including "scientific research and special projects undertaken to fulfil domestic or international demand"[1]. In the marine context, citizen science programs can provide valuable observations on coastal conditions, marine species distributions, and environmental quality that may complement or extend official data collection efforts.
However, the integration of citizen science data into official statistical frameworks involves significant methodological considerations. The SEEA Ecosystem Accounting (SEEA EA) emphasizes that "appropriate review and validation of the data will be required, including, for example, consideration of the various measurement concepts and scopes that have been applied, to ensure that the data are suitable for the purposes of ecosystem accounting and that coherence across the accounts can be achieved"[2]. This principle applies with particular force to non-official data sources, where measurement methods may not conform to established statistical standards.
This section provides guidance on four key aspects of working with citizen science and community-based monitoring data: the types of programs that generate relevant data, the quality considerations that must be addressed, approaches for integration with official statistics, and compilation procedures for processing citizen science observations into account-ready datasets. For general principles of data quality applicable across all ocean accounting work, see TG-0.7 Quality Assurance.
3.1 Types of Citizen Science Programs
Marine citizen science encompasses a broad range of programs in which volunteers participate in scientific data collection. These programs vary considerably in their structure, the types of data collected, and their potential relevance to ocean accounting. For the purposes of this guidance, citizen science programs can be organized into several categories based on their primary focus and methodology.
Several established programs in the Pacific region illustrate the range and maturity of marine citizen science. The Reef Check program, active across Pacific Island countries including Fiji, Palau, and the Federated States of Micronesia, trains volunteer divers to conduct standardized surveys of coral reef health using globally consistent protocols[3]. CoralWatch, based at the University of Queensland, engages citizen scientists in monitoring coral bleaching across the Indo-Pacific using a standardized colour reference chart that reduces observer variability[4]. In Australia, the Reef Life Survey program coordinates volunteer SCUBA divers to conduct fish and invertebrate surveys using methods benchmarked against professional scientific surveys, and has been shown to generate data comparable in quality to expert-collected observations across multiple taxa[5]. These programs demonstrate that well-designed citizen science can produce data suitable for monitoring ecosystem condition variables relevant to ocean accounts.
Biological monitoring programs
Many marine citizen science programs focus on documenting the presence, abundance, or condition of marine species. These include:
- Reef monitoring programs -- Volunteers conduct standardized surveys of coral reef health, recording coral cover, bleaching events, and associated fauna. Programs such as Reef Check have developed standardized protocols that enable consistent data collection across sites and time periods[3:1].
- Seabird and marine mammal counts -- Coastal bird counts and marine mammal sightings programs engage volunteers in systematic observations that contribute to population monitoring. These programs often have long histories that provide valuable time series data[6].
- Fish identification and abundance surveys -- Recreational divers and snorkelers contribute observations of fish species and abundance, particularly in areas of high recreational use such as marine parks[5:1].
- Intertidal zone surveys -- Volunteers survey rocky shores, mudflats, and other intertidal habitats, documenting species assemblages and environmental conditions[7].
These biological monitoring programs can potentially contribute to ecosystem condition accounts under the SEEA EA framework, particularly for variables relating to species diversity and population status. The Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting notes that "in situ monitoring and accuracy assessments of ecosystem services maps" are needed, and that "standardized approaches for in situ monitoring of ecosystem services is even less well established than modelling approaches"[8]. For detailed guidance on ecosystem condition measurement, see TG-3.5 Ecosystem Condition.
Environmental quality monitoring
Citizen science programs also collect data on environmental conditions that may be relevant to ocean accounts:
- Water quality monitoring -- Community groups collect samples and measurements of parameters such as temperature, salinity, turbidity, and nutrient levels. The FDES notes that "field-monitoring stations, especially those monitoring concentrations of pollutants in the environmental media, are usually located in 'hot spot' areas with high levels of pollution"[9], suggesting that citizen science monitoring may help fill spatial gaps in official networks.
- Marine debris surveys -- Beach cleanup programs systematically document the quantity and types of marine debris, providing data on pollution pressures on coastal ecosystems[10]. Such data may be relevant to indicators discussed in TG-2.1 Biophysical Indicators and directly support marine litter accounts in TG-6.12 Marine Litter.
- Phytoplankton and harmful algal bloom monitoring -- Volunteers collect water samples to monitor for harmful algal blooms that can affect human health and fisheries[11].
Observational reporting
Less structured citizen science approaches rely on opportunistic observations:
- Species sighting platforms -- Online platforms and mobile applications enable members of the public to report sightings of marine species, contributing to distribution records and phenological data[12].
- Environmental incident reporting -- Community members report pollution events, strandings, and other environmental incidents that may not be captured by official monitoring[13].
3.2 Data Quality Considerations
The use of citizen science data in ocean accounts requires careful attention to data quality. The UN National Quality Assurance Framework Manual for Official Statistics (UN NQAF) establishes the quality dimensions that apply to official statistics[14]. While citizen science data are not official statistics, these quality dimensions provide a useful framework for assessing fitness for purpose. The quality framework for ocean accounts is described in TG-0.7 Quality Assurance. As the UN NQAF is periodically updated, compilers should verify that quality dimension definitions and assessment procedures remain current against the latest published version.
Accuracy and reliability
The SEEA EA Technical Guidance on Biophysical Modelling emphasizes that "decision makers are more likely to incorporate science into their decision-making if it is perceived as credible"[15]. For citizen science data, accuracy concerns arise from several sources:
- Observer variability -- Volunteers may have varying levels of experience and skill in species identification and measurement. Programs that provide training and certification can reduce but not eliminate this variability[16].
- Sampling bias -- Citizen science observations are often concentrated in accessible locations and favorable conditions, leading to non-random spatial and temporal coverage[17].
- Measurement protocols -- Even with standardized protocols, variations in how volunteers interpret and apply instructions can introduce measurement error[18].
The SEEA EA guidance notes that "unless detailed parameterization and validation with measured data has been conducted, outputs of ecosystem services models should be seen as best estimates, rather than absolute values"[19]. This principle applies equally to citizen science data -- the data should be understood as estimates subject to uncertainty, and this uncertainty should be characterized and communicated.
Validation approaches
Several approaches can be used to validate citizen science data:
- Expert verification -- Observations, particularly species identifications, can be verified by experts before being incorporated into databases[20].
- Comparison with official data -- Where citizen science monitoring overlaps with official monitoring sites, comparisons can provide insight into data quality and potential biases[21].
- Internal consistency checks -- Automated screening can flag unusual observations for review, such as species reported outside their known range or measurements falling outside expected bounds[22].
- Photographic documentation -- Requiring photographic evidence for observations enables verification and creates a permanent record[23].
The FDES notes that data from scientific research and special projects typically "(i) are usually available at no or low cost, (ii) minimize the response burden, (iii) can be used to address data gaps and (iv) are useful for developing coefficients for models"[24]. These advantages apply to citizen science data, but must be weighed against the need for additional quality assurance effort.
Uncertainty characterization
Given the inherent uncertainties in citizen science data, characterizing and communicating uncertainty is essential. The Technical Guidance on Biophysical Modelling describes approaches for assessing uncertainty, including "uncertainty matrices, which outline possible sources of uncertainty for each model"[25]. For citizen science data, uncertainty characterization should address:
- Known biases -- Document any systematic biases in coverage, timing, or measurement that may affect the data[26].
- Precision estimates -- Where feasible, estimate the precision of measurements based on replicate observations or comparison with reference data[27].
- Completeness -- Document the spatial and temporal coverage of the data and identify significant gaps[28].
As an illustrative example, consider a coral reef monitoring program using volunteer divers. An uncertainty characterization for such a program would document: (a) spatial bias toward accessible reefs near population centres, with remote reef systems under-represented; (b) temporal bias toward calm-weather survey periods, potentially missing seasonal variation; (c) observer variability in coral identification, estimated through replicate surveys at calibration sites; and (d) completeness gaps, noting the proportion of the target reef area surveyed and the frequency of repeat visits. This characterization enables users to assess whether the data are fit for purpose for specific accounting applications -- for example, the data may be suitable for tracking broad trends in coral condition at monitored sites (Tier 2 integration) while being insufficient for precise estimates of total reef extent (which would require Tier 3 calibration against remote sensing or professional survey data).
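As one way to make such a characterization operational, the four elements above could be captured as structured metadata with simple fitness-for-purpose checks. The following Python sketch is illustrative only; the field names, thresholds, and the mapping to trend-monitoring versus stock-estimation uses are assumptions, not prescribed standards.

```python
from dataclasses import dataclass

@dataclass
class UncertaintyProfile:
    """Structured uncertainty characterization for a citizen science dataset.
    Field names and thresholds are illustrative, not a prescribed standard."""
    spatial_bias: str          # (a) e.g. accessible reefs over-represented
    temporal_bias: str         # (b) e.g. calm-weather survey periods only
    observer_cv: float         # (c) coefficient of variation from replicate surveys
    coverage_fraction: float   # (d) share of the target area surveyed (0-1)

    def fit_for_trend_monitoring(self) -> bool:
        # Tier 2 use at monitored sites: modest observer precision acceptable
        return self.observer_cv < 0.30

    def fit_for_stock_estimation(self) -> bool:
        # Tier 3 use: needs broad coverage and tight observer agreement
        return self.coverage_fraction >= 0.80 and self.observer_cv < 0.10

profile = UncertaintyProfile(
    spatial_bias="accessible reefs near population centres over-represented",
    temporal_bias="calm-weather survey periods; seasonal variation missed",
    observer_cv=0.18,
    coverage_fraction=0.35,
)
```

For the hypothetical reef program above, such a profile would pass the trend-monitoring check but fail the stock-estimation check, mirroring the Tier 2 versus Tier 3 distinction in the text.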
3.3 Integration with Official Statistics
Integrating citizen science data with official statistics requires establishing clear methodological protocols that maintain the integrity of official data while leveraging the potential of citizen science contributions.
Tiered integration approaches
The Technical Guidance on Biophysical Modelling describes a tiered approach to data quality that may be adapted for citizen science integration:
- Tier 1 approaches use citizen science data for awareness-raising, education, and analysis of broad patterns, without integration into official accounts[29].
- Tier 2 approaches use citizen science data to supplement official data where coverage is limited, with appropriate uncertainty characterization and metadata[30].
- Tier 3 approaches involve formal calibration of citizen science data against official monitoring, with demonstrated statistical equivalence enabling direct integration[31].
The appropriate tier depends on the specific application, the quality of the citizen science data, and the availability of official data. Most current applications of citizen science data in environmental accounting operate at Tier 1 or Tier 2 levels.
Table 3.3.1 summarizes the characteristics, quality requirements, and typical applications of each tier.
Table 3.3.1: Citizen Science Data Quality Tiers for Ocean Accounts
| Tier | Role in Accounts | Quality Requirements | Example Applications |
|---|---|---|---|
| Tier 1: Awareness | Gap identification, hypothesis generation | Minimal validation | Species sightings, pollution reports |
| Tier 2: Supplementary | Fill data gaps with caveats | Moderate validation, uncertainty documented | Beach litter surveys, habitat mapping |
| Tier 3: Formal | Direct account input | Full validation, calibrated against official data | Reef Check surveys, standardized bird counts |
Metadata and provenance
The SEEA EA guidance emphasizes that "a data provenance system improves users' ability to understand the fitness for purpose of data sets"[32]. When incorporating citizen science data, metadata should document:
- The citizen science program and its protocols
- Training and quality assurance procedures applied
- Any validation or calibration conducted
- Known limitations and biases
- The relationship between citizen science data and any official data used in the same accounts
The FAIR principles (Findable, Accessible, Interoperable, Reusable) provide guidance on data management that is particularly relevant for citizen science data[33]. See TG-0.7 Quality Assurance for discussion of metadata standards in ocean accounting.
Distinguishing data sources
Where citizen science data are used in ocean accounts, they should be clearly distinguished from official data sources. The accounts should indicate which estimates are based wholly or partly on citizen science data, enabling users to assess fitness for purpose for their specific needs[34].
3.4 Community-Based Monitoring
Community-based monitoring represents a distinct form of participatory data collection that involves ongoing engagement by local communities in environmental observation. Unlike project-based citizen science, community-based monitoring often reflects long-term community interest in local resources and may incorporate traditional knowledge and practices.
Coastal community monitoring
In many coastal contexts, fishing communities, coastal residents, and marine resource user groups conduct informal or semi-formal monitoring of local conditions. This monitoring may encompass:
- Observations of fish abundance and species composition
- Documentation of changes in coastal habitats
- Recording of environmental conditions relevant to fishing and other livelihoods
- Monitoring of resource access and use
The Taskforce on Nature-related Financial Disclosures (TNFD) recommendations recognize the importance of engagement with "Indigenous Peoples, Local Communities and affected stakeholders" in assessment of nature-related issues[35]. While the TNFD framework addresses corporate disclosure rather than official statistics, the underlying principle -- that local communities possess valuable knowledge about their environments -- applies equally to ocean accounting.
Several Pacific Island community-based monitoring programs illustrate the potential for contributing to ocean accounts. In Fiji, the Locally-Managed Marine Area (LMMA) Network coordinates community-based monitoring of coral reef and coastal fisheries across more than 400 communities, using standardized protocols adapted to local capacity and generating long-term time series data on reef fish abundance and coral cover[36]. In Samoa, village-based fisheries monitoring programs have documented changes in coastal fish stocks using methods combining traditional fisher knowledge with structured survey protocols[37]. In Palau, community monitoring of giant clam populations has provided data used in both fisheries management and ecosystem condition assessment[38]. These programs demonstrate that sustained community engagement can yield data of sufficient quality and consistency to support accounting applications, particularly at Tier 2 level.
Integration with traditional knowledge
Community-based monitoring often draws upon, or is informed by, traditional ecological knowledge. The relationship between community-based monitoring and traditional knowledge systems is complex and requires respectful engagement with knowledge holders. Detailed guidance on working with traditional knowledge is provided in TG-3.6 Traditional Knowledge.
Key principles for integration include:
- Ensuring free, prior, and informed consent for use of community knowledge[39]
- Respecting intellectual property and cultural protocols associated with traditional knowledge[40]
- Acknowledging the sources of community-derived data in accounts metadata[41]
- Building reciprocal relationships that benefit communities as well as accounting efforts[42]
The TNFD recommendations include guidance on engagement of Indigenous Peoples, Local Communities and affected stakeholders, emphasizing that organizations should "describe the Indigenous Peoples, Local Communities and affected stakeholders engaged in the assessment and management of nature-related dependencies, impacts, risks and opportunities, how they were identified, and a confirmation that this description has been agreed with those engaged"[43].
3.5 Compilation Procedures
This section provides practical guidance on the procedures for compiling citizen science data into ocean accounts, addressing data acquisition, processing, quality control, and integration steps.
Data acquisition and screening
The first stage of compilation involves acquiring citizen science data and conducting initial screening for quality issues:
Step 1: Identify relevant programs -- Survey available citizen science and community-based monitoring programs operating in the accounting area. For marine contexts, priority programs typically include:
- Beach litter surveys (e.g., Ocean Conservancy International Coastal Cleanup, OSPAR Beach Litter Monitoring)
- Reef monitoring programs (e.g., Reef Check, CoralWatch)
- Marine species sighting platforms (e.g., iNaturalist, eBird for seabirds)
- Water quality monitoring networks (community-based programs)
Step 2: Request data and documentation -- Contact program coordinators to request:
- Raw observational data (species counts, condition scores, litter counts, water quality measurements)
- Program protocols and standard operating procedures
- Training materials and observer qualifications
- Quality control procedures applied by the program
- Metadata on survey locations, dates, and conditions
Step 3: Conduct initial screening -- Apply automated checks to flag obvious errors:
- Values outside physically plausible ranges (e.g., negative counts, impossible measurements)
- Locations outside the survey area or on land (for marine observations)
- Dates outside the survey period or in the future
- Duplicate records
Remove flagged records or query program coordinators for clarification. Document the number and proportion of records removed at this stage.
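The automated checks in Step 3 can be sketched in Python as follows. The record schema, bounding box, and flag names are illustrative assumptions rather than a standard format.

```python
from datetime import date

def screen_record(rec, bbox, period):
    """Return quality flags for one observation; an empty list means it passes.
    Record fields and bounds are illustrative assumptions, not a standard schema."""
    flags = []
    lat_min, lat_max, lon_min, lon_max = bbox
    start, end = period
    if rec["count"] < 0:
        flags.append("implausible_value")          # e.g. negative counts
    if not (lat_min <= rec["lat"] <= lat_max and lon_min <= rec["lon"] <= lon_max):
        flags.append("outside_survey_area")
    if rec["date"] < start or rec["date"] > end or rec["date"] > date.today():
        flags.append("invalid_date")               # outside period or in the future
    return flags

def screen_dataset(records, bbox, period):
    """Split records into clean and flagged sets, dropping exact duplicates.
    The proportion flagged should be documented, as Step 3 requires."""
    seen, clean, flagged = set(), [], []
    for rec in records:
        key = (rec["site"], rec["date"], rec["count"])
        if key in seen:
            flagged.append((rec, ["duplicate"]))
            continue
        seen.add(key)
        rec_flags = screen_record(rec, bbox, period)
        if rec_flags:
            flagged.append((rec, rec_flags))
        else:
            clean.append(rec)
    return clean, flagged

records = [
    {"site": "A", "date": date(2023, 5, 1), "lat": -17.5, "lon": 178.0, "count": 12},
    {"site": "A", "date": date(2023, 5, 1), "lat": -17.5, "lon": 178.0, "count": 12},  # duplicate
    {"site": "B", "date": date(2023, 6, 1), "lat": -17.6, "lon": 178.1, "count": -3},  # negative count
]
clean, flagged = screen_dataset(records, bbox=(-19.0, -16.0, 177.0, 180.0),
                                period=(date(2023, 1, 1), date(2023, 12, 31)))
```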
Bias assessment and correction
Step 4: Assess spatial bias -- Map the distribution of citizen science observations and compare with the desired coverage:
- Calculate the density of observations per unit area
- Identify areas with high and low coverage
- Compare coverage with population density, road networks, and accessibility indicators to identify biases
In reef monitoring, for example, observations are often heavily concentrated near dive shops and tourist areas. Document this spatial bias in metadata and, where necessary, stratify the dataset to avoid over-weighting accessible sites.
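The spatial coverage assessment in Step 4 can be sketched as a per-stratum observation count with a minimum-coverage check. The strata, schema, and threshold below are illustrative assumptions.

```python
from collections import Counter

def coverage_by_stratum(observations, stratum_of):
    """Count observations per spatial stratum (grid cell, reef polygon, district).
    stratum_of is a caller-supplied mapping function; the schema is illustrative."""
    return Counter(stratum_of(obs) for obs in observations)

def undercovered_strata(coverage, all_strata, min_obs=5):
    """Strata falling below a minimum observation count (threshold illustrative)."""
    return [s for s in all_strata if coverage.get(s, 0) < min_obs]

# Toy data: accessible coastline is heavily over-sampled
observations = [{"cell": "urban"}] * 20 + [{"cell": "remote"}] * 2
coverage = coverage_by_stratum(observations, lambda o: o["cell"])
gaps = undercovered_strata(coverage, ["urban", "remote", "offshore"])
```

Strata flagged as under-covered would be documented as gaps and considered for stratified reporting rather than extrapolation.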
Step 5: Assess temporal bias -- Analyze the temporal distribution of observations:
- Plot observations by month, season, and year
- Identify periods with high and low coverage
- Compare with weather patterns and program schedules
Beach litter surveys, for instance, may be concentrated during organized cleanup events, missing inter-event accumulation. Document temporal bias and consider time-stratified sampling for trend estimation.
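The temporal assessment in Step 5 can be sketched by computing monthly observation shares and a simple event-concentration metric (the largest single-month share). The toy dates below are hypothetical.

```python
from collections import Counter
from datetime import date

def monthly_shares(dates):
    """Share of observations falling in each calendar month."""
    counts = Counter(d.month for d in dates)
    total = sum(counts.values())
    return {month: n / total for month, n in counts.items()}

def event_concentration(dates):
    """Largest single-month share of observations. A high value suggests
    clustering around organized events (e.g. cleanup days) rather than
    coverage spread across the year."""
    return max(monthly_shares(dates).values())

# Toy data clustered on a September event, as in organized cleanups
survey_dates = [date(2023, 9, 16)] * 6 + [date(2023, 3, 4)] * 4
peak_share = event_concentration(survey_dates)   # 0.6: 60% of observations in September
```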
Step 6: Apply bias corrections where justified -- For some applications, statistical corrections for known biases may be appropriate:
- Distance from access point corrections for spatial bias
- Weather condition corrections for temporal bias
- Occupancy modeling for detection probability (for species sightings)
Apply corrections conservatively and document methods. Where correction is not feasible, restrict use of the data to applications where the bias does not compromise fitness for purpose (e.g., use spatially biased data only for trend monitoring at sampled sites, not for total stock estimation).
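Where adjustment is limited to stratified reporting, a post-stratified estimator illustrates the idea: stratum means are weighted by each stratum's share of the accounting area rather than by its share of observations, so over-sampled accessible strata do not dominate. The strata, densities, and area shares below are hypothetical.

```python
def poststratified_mean(values_by_stratum, area_share):
    """Weight stratum means by area share rather than by observation count.
    A minimal sketch; stratum names and shares are illustrative assumptions."""
    estimate = 0.0
    for stratum, values in values_by_stratum.items():
        estimate += area_share[stratum] * (sum(values) / len(values))
    return estimate

# Toy data: accessible sites are over-sampled and have higher litter density
density = {"urban": [10.0, 12.0, 14.0], "remote": [2.0, 4.0]}
shares = {"urban": 0.2, "remote": 0.8}   # assumed coastline shares by stratum
corrected = poststratified_mean(density, shares)   # 4.8, versus a naive mean of 8.4
```

The naive mean over all observations would be pulled toward the over-sampled urban stratum; the post-stratified estimate reflects the assumed area shares instead.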
Validation and calibration
Step 7: Cross-validate with official data -- Where citizen science and official monitoring overlap in space and time:
- Extract paired observations from both sources
- Calculate correlation coefficients and regression relationships
- Assess systematic biases (e.g., consistent under- or over-estimation)
- Estimate measurement error variances
For Tier 3 integration, calibration functions can be derived from these comparisons. For example, if citizen science beach litter counts are consistently 80% of professional survey counts (due to incomplete area coverage), a calibration factor of 1.25 can be applied with documented uncertainty.
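The Step 7 calibration can be sketched as a ratio-of-means estimator over paired observations at overlapping sites. The toy counts below mirror the 80% example; a production estimator would quantify uncertainty more carefully, for example by bootstrap resampling of sites.

```python
import statistics

def calibration_factor(citizen_counts, professional_counts):
    """Derive a multiplicative calibration factor from paired surveys at
    overlapping sites (ratio-of-means estimator). The spread of site-level
    ratios gives a rough indication of calibration uncertainty."""
    factor = sum(professional_counts) / sum(citizen_counts)
    site_ratios = [p / c for p, c in zip(professional_counts, citizen_counts)]
    spread = statistics.stdev(site_ratios) if len(site_ratios) > 1 else 0.0
    return factor, spread

# Toy paired counts where volunteers record 80% of professional totals
factor, spread = calibration_factor([80, 40, 160], [100, 50, 200])
calibrated = [round(c * factor) for c in [80, 40, 160]]   # [100, 50, 200]
```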
Step 8: Expert review of flagged observations -- For observations that fail consistency checks but may be valid:
- Compile sets of unusual observations (e.g., rare species, extreme values)
- Request expert review from taxonomists or subject matter specialists
- Verify observations with photographic evidence where available
- Accept, reject, or flag as uncertain based on expert judgment
Document the number of observations reviewed and the proportion accepted, rejected, or flagged.
Dataset preparation and integration
Step 9: Aggregate to accounting units -- Transform point observations to the spatial and temporal units used in accounts:
- For ecosystem condition accounts: aggregate observations to Basic Spatial Units (BSUs) or Ecosystem Accounting Areas (EAAs)
- For time series accounts: aggregate to annual or sub-annual periods
- Calculate summary statistics (mean, median, standard deviation) for each unit
- Estimate uncertainty (standard errors, confidence intervals)
For example, citizen science reef condition observations would be aggregated to reef ecosystem polygons, with mean coral cover and associated confidence intervals calculated for each polygon.
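The Step 9 aggregation can be sketched as follows. The observation schema, the unit-mapping function, and the normal-approximation confidence interval are simplifying assumptions; single-observation units are left without a within-unit interval.

```python
import statistics
from collections import defaultdict

def aggregate_to_units(observations, unit_of):
    """Aggregate point observations to accounting units (e.g. reef polygons),
    returning n, mean, and an approximate normal 95% CI per unit.
    unit_of maps an observation to its unit; the schema is illustrative."""
    by_unit = defaultdict(list)
    for obs in observations:
        by_unit[unit_of(obs)].append(obs["value"])
    summary = {}
    for unit, values in by_unit.items():
        mean = statistics.fmean(values)
        if len(values) > 1:
            se = statistics.stdev(values) / len(values) ** 0.5
            ci = (mean - 1.96 * se, mean + 1.96 * se)
        else:
            ci = None   # single observation: no within-unit uncertainty estimate
        summary[unit] = {"n": len(values), "mean": mean, "ci95": ci}
    return summary

# Toy coral cover observations (%) mapped to reef polygons
coral = [{"poly": "R1", "value": 40.0}, {"poly": "R1", "value": 60.0},
         {"poly": "R2", "value": 30.0}]
summary = aggregate_to_units(coral, lambda o: o["poly"])
```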
Step 10: Compile metadata and provenance -- For each citizen science dataset integrated into accounts, compile metadata including:
- Program name and coordinating organization
- Survey methods and protocols
- Observer training and qualifications
- Quality control procedures applied
- Known biases and limitations
- Validation and calibration procedures
- Tier classification (Tier 1, 2, or 3)
- Period of data collection
- Number of observations and spatial coverage
This metadata should accompany the accounts as supplementary documentation.
Step 11: Distinguish in account tables -- In account compilation tables, clearly mark estimates derived from citizen science:
- Use separate columns or rows for citizen science-based estimates
- Apply footnotes indicating data source and quality tier
- Provide uncertainty ranges where citizen science data are used
For example, an ecosystem condition account might include a column for "Coral cover (official monitoring)" and a separate column for "Coral cover (citizen science, Tier 2)" with associated confidence intervals.
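A minimal sketch of Step 11's source labelling is shown below. The row format and values are hypothetical; actual tables should follow the compiler's account templates.

```python
def account_row(variable, estimate, ci, source, tier=None):
    """Format one account-table entry, flagging citizen science sources and
    their quality tier. Layout is illustrative, not a prescribed template."""
    label = f"{variable} ({source}" + (f", Tier {tier}" if tier else "") + ")"
    lo, hi = ci
    return f"{label}: {estimate:.1f} (95% CI: {lo:.1f}-{hi:.1f})"

official = account_row("Coral cover %", 34.2, (31.0, 37.4), "official monitoring")
volunteer = account_row("Coral cover %", 36.8, (30.1, 43.5), "citizen science", tier=2)
```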
Worked example: Beach litter data compilation
A concrete worked example illustrates the compilation procedure. A statistical office compiling marine litter accounts (see TG-6.12 Marine Litter) receives beach litter survey data from the Ocean Conservancy International Coastal Cleanup program for 85 beach sites surveyed over a three-year period. The compilation steps are:
Step 1-2: Acquire data including litter counts by material category, beach lengths surveyed, and survey dates. Obtain program protocols specifying survey methods.
Step 3: Screen data, identifying and removing 8 records with impossible dates and 3 records with beach lengths exceeding known beach dimensions (0.97% of records removed).
Step 4: Map survey sites, revealing concentration in urban coastal areas (72% of surveys within 20km of cities > 50,000 population). Document spatial bias; restrict use of data to "beach litter density at surveyed sites" rather than extrapolating to all coastlines.
Step 5: Plot temporal distribution, confirming concentration on cleanup event dates (September International Coastal Cleanup Day = 58% of observations). Supplement with quarterly surveys at 12 sentinel sites to capture inter-event periods.
Step 6: No bias correction applied; instead, stratify reporting by urban/remote and event/non-event categories.
Step 7: Compare 12 sentinel sites with professional monitoring, finding citizen science counts average 85% of professional counts (95% CI: 78-92%). Derive calibration factor of 1.18 (=1/0.85) for Tier 3 use, with uncertainty range of 1.09-1.28.
Step 8: Expert review not required for litter counts (straightforward enumeration), but material categorization reviewed by 2 marine debris specialists for 10% of surveys, confirming 94% agreement.
Step 9: Aggregate to coastal administrative units (districts), calculating mean litter density (items per meter of beach) and total estimated stock (items) with confidence intervals reflecting sampling error and calibration uncertainty.
Step 10: Compile metadata documenting International Coastal Cleanup protocols, volunteer training (brief on-site instruction), known biases (urban concentration, event concentration), validation results (85% of professional counts), and Tier 2/3 classification (Tier 3 for calibrated data at overlapping sites, Tier 2 for data at non-overlapping sites).
Step 11: In marine litter stock account tables, present:
- Beach litter stock (official monitoring): 450,000 items (95% CI: 410,000-490,000) [12 sites]
- Beach litter stock (citizen science, Tier 3, calibrated): 1,850,000 items (95% CI: 1,630,000-2,070,000) [85 sites]
- Combined estimate (sum of the two stock estimates, assuming independent errors): 2,300,000 items (95% CI: approximately 2,076,000-2,524,000)
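The combined figure can be checked with a short sketch: point estimates from non-overlapping site sets add, and under an independence assumption the 95% CI half-widths combine in quadrature (adding them linearly would be a more conservative choice under correlated errors).

```python
def combine_independent(estimates):
    """Combine stock estimates from non-overlapping site sets: totals add,
    and under an independence assumption the 95% CI half-widths add in
    quadrature. A sketch only; correlated errors would need wider intervals."""
    total = sum(point for point, _ in estimates)
    half_width = sum(hw ** 2 for _, hw in estimates) ** 0.5
    return total, (total - half_width, total + half_width)

# Component estimates from the worked example: (point estimate, CI half-width)
official = (450_000, 40_000)
citizen = (1_850_000, 220_000)
total, ci = combine_independent([official, citizen])   # 2,300,000 +/- ~224,000
```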
This worked example demonstrates the practical steps and decision points in citizen science data compilation. Key principles illustrated include: transparent documentation of data processing decisions, restriction of data use to applications appropriate for the quality tier, derivation of calibration factors from overlap with official data, and clear communication of uncertainty.
3.6 Decision Use Cases for Citizen Science Data
This section describes specific decision contexts where citizen science data can support ocean accounting and related policy analysis, illustrating the practical utility of the compilation procedures described in Section 3.5.
Gap-filling for data-sparse coastal areas
Many coastal nations, particularly Small Island Developing States (SIDS), have limited resources for establishing comprehensive official monitoring networks. Citizen science can provide observations in areas that would otherwise have no data:
Use case: A Pacific SIDS conducts official reef monitoring at 8 sites, providing high-quality data but covering only 5% of reef area. Reef Check volunteer surveys add 45 sites, increasing coverage to 35% of reef area. While the volunteer data are less precise (Tier 2), they enable spatial stratification of condition estimates and identification of priority areas for management intervention. The combined dataset (official + citizen science) supports ecosystem condition accounts that would not be feasible with official data alone.
Decision support: Marine spatial planning authorities use the expanded dataset to identify degraded reef areas requiring protection and relatively intact areas suitable for tourism zoning. Without citizen science data, planning would proceed on the basis of limited site coverage or coarse remote sensing proxies.
Supplementing temporal resolution for trend detection
Official monitoring programs often operate on multi-year cycles due to resource constraints. Citizen science can provide higher-frequency observations:
Use case: A national coastal water quality program samples 30 estuaries biennially. A community-based monitoring network samples 15 of these estuaries monthly. The citizen science data capture seasonal variation and enable detection of short-term pollution events that would be missed by biennial sampling. When calibrated against the official biennial data (Tier 3 approach), the citizen science time series provide improved trend detection for nitrogen concentrations and turbidity.
Decision support: Watershed managers use the high-frequency citizen science data to identify seasonal pollution patterns linked to agricultural runoff timing, enabling targeted interventions. The biennial official data alone would not reveal these temporal patterns.
Beach litter monitoring for marine pollution accounts
Beach litter surveys are among the most established applications of citizen science in marine environmental monitoring, with direct relevance to SDG 14.1 (marine pollution) and marine litter accounts (see TG-6.12 Marine Litter):
Use case: A coastal nation compiles marine litter accounts drawing on beach litter survey data from the International Coastal Cleanup and regional monitoring programs. Annual cleanup events engage 5,000-8,000 volunteers surveying 200-300 beach sites. Data are standardized using Ocean Conservancy protocols, enabling material categorization and source attribution (land-based vs. sea-based litter).
Decision support: The marine litter accounts, combining citizen science survey data with waste management statistics, reveal that single-use plastics from coastal tourism comprise 42% of beach litter by item count. This attribution supports targeted policy interventions including bans on specific single-use items and tourism operator education programs. The spatial distribution of citizen science observations identifies pollution hotspots requiring enhanced cleanup and enforcement.
This decision context is particularly relevant for SIDS where beach tourism is economically important and beach aesthetics affect visitor satisfaction. Citizen science provides cost-effective, large-scale monitoring that would be prohibitively expensive to conduct through official programs alone.
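The categorization and share calculation behind an attribution like "single-use plastics comprise 42% of beach litter by item count" reduces to a simple aggregation. The records, category names, and the mapping to a single-use plastic group below are all hypothetical; real compilations would use the Ocean Conservancy or regional protocol category lists.

```python
from collections import Counter

# Hypothetical standardized survey records: (item category, count).
records = [
    ("plastic bottle", 120), ("plastic bag", 95), ("straw/stirrer", 60),
    ("fishing rope", 40), ("glass bottle", 25), ("food wrapper", 110),
    ("cigarette butt", 150),
]
# Illustrative mapping of categories to the single-use plastic group.
single_use = {"plastic bottle", "plastic bag", "straw/stirrer", "food wrapper"}

# Aggregate counts per category across all survey records.
totals = Counter()
for item, count in records:
    totals[item] += count

total_items = sum(totals.values())
su_count = sum(c for item, c in totals.items() if item in single_use)
share = su_count / total_items
print(f"Single-use plastics: {share:.1%} of {total_items} items")
```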
Reef condition reporting for ecosystem accounts
Coral reef condition is a priority indicator for many Pacific and Caribbean nations. Citizen science reef monitoring programs provide data for ecosystem condition accounts:
Use case: Reef Check volunteer diver surveys complement official scientific monitoring, providing observations from 60 reef sites compared to 15 official sites. Volunteers record coral cover, bleaching incidence, and fish abundance using standardized underwater survey forms. Data undergo expert validation (photographic verification of coral identifications) and calibration against professional surveys at 10 overlapping sites.
Decision support: The expanded spatial coverage enables ecosystem condition accounts stratified by reef type (fringing, barrier, atoll) and management status (protected vs. unprotected). Trend analysis reveals that condition is declining faster in unprotected areas, supporting expansion of marine protected area networks. The citizen science data provide the spatial coverage necessary for meaningful stratification that would not be possible with official monitoring alone.
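One simple form of the calibration step at the 10 overlapping sites is a ratio estimator: scale volunteer coral cover by the ratio of professional to volunteer totals on the overlap, then apply that factor to volunteer-only sites. The cover values below are hypothetical; regression-based approaches may be preferable where volunteer bias varies with cover level.

```python
# Hypothetical percent coral cover at sites surveyed by both
# volunteers and professionals.
volunteer_overlap = [32, 45, 28, 51, 38, 41, 27, 36, 49, 33]
pro_overlap       = [29, 42, 26, 47, 35, 39, 25, 33, 46, 31]

# Ratio calibration factor from the overlap sites.
factor = sum(pro_overlap) / sum(volunteer_overlap)

# Apply to volunteer-only sites before stratifying the condition account.
volunteer_only = [44, 30, 37]
calibrated = [round(v * factor, 1) for v in volunteer_only]
print(f"Calibration factor: {factor:.3f}; calibrated cover: {calibrated}")
```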
These use cases illustrate that citizen science data serve distinct purposes in ocean accounting: gap-filling where official data are sparse, temporal supplementation where official data are infrequent, cost-effective large-scale monitoring for distributed pressures like beach litter, and spatial expansion for ecosystem condition assessment. In each case, appropriate quality assurance (bias assessment, calibration, validation) and transparent metadata documentation ensure that the data are fit for purpose for the intended decision context.
3.7 Emerging Practices and Future Development
The field of citizen science and community-based monitoring is evolving rapidly, with advances in technology, methodology, and institutional frameworks. Statistical offices and accounting practitioners should be aware of emerging developments that may expand the potential for citizen science contributions to ocean accounts.
Technology developments
Mobile applications, low-cost sensors, and image recognition technologies are making it easier for volunteers to collect and submit observations, while also enabling new forms of quality control[44]. Environmental DNA (eDNA) sampling, where volunteers collect water samples for laboratory analysis, represents a particularly promising development for marine biodiversity monitoring[45].
Methodological advances
Research is advancing on statistical methods for combining citizen science data with official monitoring data, accounting for known biases and uncertainty[46]. These methods, sometimes termed "data fusion" or "integrated modelling," may enable more rigorous integration of citizen science data into official accounts over time. Integrated species distribution models, which combine structured survey data with opportunistic citizen science observations within a single statistical framework, have shown particular promise for marine applications. Recent work on occupancy models that account for imperfect detection in volunteer surveys is enabling more robust estimation of species occurrence and abundance trends from citizen science datasets[47]. As these statistical methods mature and become more widely adopted, the potential for Tier 3 integration of citizen science data is likely to increase.
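To illustrate why correcting for imperfect detection matters, here is a deliberately simplified sketch. It assumes the per-visit detection probability is known, whereas a real occupancy model estimates detection and occupancy jointly from the detection histories (e.g. by maximum likelihood). All numbers are hypothetical.

```python
def corrected_occupancy(sites_detected, sites_total, p, visits):
    """Adjust the naive occupancy rate for imperfect detection:
    P(detected at least once | occupied) = 1 - (1 - p)**visits."""
    naive = sites_detected / sites_total
    p_any = 1 - (1 - p) ** visits
    return naive / p_any

# Hypothetical survey: species detected at 18 of 60 sites, each visited
# 3 times, with an assumed per-visit detection probability of 0.4.
naive = 18 / 60
adjusted = corrected_occupancy(18, 60, p=0.4, visits=3)
print(f"Naive occupancy: {naive:.2f}, detection-adjusted: {adjusted:.3f}")
```

The gap between the naive and adjusted rates shows how uncorrected volunteer data would understate occurrence, which is why these models matter for Tier 3 integration.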
Institutional developments
Some national statistical offices are beginning to explore frameworks for incorporating citizen science and other non-traditional data sources into official statistics[48]. As these frameworks mature, clearer guidance on integration approaches is likely to emerge.
The SEEA EA Technical Guidance on Biophysical Modelling notes that "data quality frameworks developed by statistical agencies currently do not include standards for modelled data. These frameworks should be expanded to encompass the specific quality issues that arise from modelled data"[49]. The same observation applies to citizen science data: existing quality frameworks were developed for traditional data sources and may need adaptation for the specific characteristics of volunteer-collected data.
4. Acknowledgements
This Circular has been approved for public circulation and comment by the GOAP Technical Experts Group in accordance with the Circular Publication Procedure.
Authors: [Names and affiliations]
Reviewers: [Names and affiliations]
5. References
[1] United Nations Statistics Division (2017). Framework for the Development of Environment Statistics (FDES 2013). Studies in Methods, Series M, No. 92. New York: United Nations. Para 1.18.
[2] United Nations (2021). System of Environmental-Economic Accounting -- Ecosystem Accounting. New York: United Nations. Para 2.88.
[3] Hodgson, G. (1999). A global assessment of human effects on coral reefs. Marine Pollution Bulletin 38(5): 345-355.
[4] Siebeck, U.E. et al. (2006). Monitoring coral bleaching using a colour reference card. Coral Reefs 25(3): 453-460.
[5] Edgar, G.J. and Stuart-Smith, R.D. (2014). Systematic global assessment of reef fish communities by the Reef Life Survey program. Scientific Data 1: 140007.
[6] Dunn, E.H. et al. (2005). High priority needs for range-wide monitoring of North American landbirds. Partners in Flight Technical Series No. 2.
[7] Mieszkowska, N. et al. (2006). Marine Biodiversity and Climate Change: assessing and predicting the influence of climatic change using intertidal rocky shore biota. Occasional Publications, Marine Biological Association No. 20.
[8] United Nations (2022). Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting. New York: United Nations. Para 383.
[9] United Nations Statistics Division (2017). Framework for the Development of Environment Statistics (FDES 2013). Studies in Methods, Series M, No. 92. Para 1.31.
[10] Hardesty, B.D. et al. (2017). Using numerical model simulations to improve the understanding of micro-plastic distribution and pathways in the marine environment. Frontiers in Marine Science 4: 30.
[11] Berdalet, E. et al. (2016). GlobalHAB: A new program to promote international research, observations, and modeling of harmful algal blooms in aquatic systems. Oceanography 29(1): 10-21.
[12] Sullivan, B.L. et al. (2014). The eBird enterprise: An integrated approach to development and application of citizen science. Biological Conservation 169: 31-40.
[13] Newman, G. et al. (2012). The future of citizen science: emerging technologies and shifting paradigms. Frontiers in Ecology and the Environment 10(6): 298-304.
[14] United Nations (2019). United Nations National Quality Assurance Frameworks Manual for Official Statistics. New York: United Nations Department of Economic and Social Affairs Statistics Division.
[15] United Nations (2022). Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting. Para 359.
[16] Bonney, R. et al. (2009). Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience 59(11): 977-984.
[17] Bird, T.J. et al. (2014). Statistical solutions for error and bias in global citizen science datasets. Biological Conservation 173: 144-154.
[18] Lewandowski, E. and Specht, H. (2015). Influence of volunteer and project characteristics on data quality of biological surveys. Conservation Biology 29(3): 713-723.
[19] United Nations (2022). Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting. Para 369.
[20] Swanson, A. et al. (2016). A generalized approach for producing, quantifying, and validating citizen science data from wildlife images. Conservation Biology 30(3): 520-531.
[21] Kelling, S. et al. (2015). Can observation skills of citizen scientists be estimated using species accumulation curves? PLoS ONE 10(10): e0139600.
[22] Wiggins, A. et al. (2011). Mechanisms for data quality and validation in citizen science. IEEE Seventh International Conference on e-Science Workshops, pp. 14-19.
[23] Kosmala, M. et al. (2016). Assessing data quality in citizen science. Frontiers in Ecology and the Environment 14(10): 551-560.
[24] United Nations Statistics Division (2017). Framework for the Development of Environment Statistics (FDES 2013). Para 1.33.
[25] United Nations (2022). Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting. Para 360.
[26] Isaac, N.J.B. et al. (2014). Statistics for citizen science: extracting signals of change from noisy ecological data. Methods in Ecology and Evolution 5(10): 1052-1060.
[27] Dickinson, J.L. et al. (2010). Citizen science as an ecological research tool: challenges and benefits. Annual Review of Ecology, Evolution, and Systematics 41: 149-172.
[28] Pocock, M.J.O. et al. (2017). A vision for global biodiversity monitoring with citizen science. Advances in Ecological Research 59: 169-223.
[29] United Nations (2022). Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting. Para 53.
[30] Ibid., Para 54.
[31] Ibid., Para 55.
[32] Ibid., Para 374.
[33] Wilkinson, M.D. et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3: 160018.
[34] United Nations (2019). United Nations National Quality Assurance Frameworks Manual for Official Statistics. Chapter 8.
[35] Taskforce on Nature-related Financial Disclosures (2023). Recommendations of the Taskforce on Nature-related Financial Disclosures. September 2023. Section 5.5.
[36] Jupiter, S.D. et al. (2014). Locally-managed marine areas: multiple objectives and diverse strategies. Pacific Conservation Biology 20(2): 165-179.
[37] Friedlander, A.M. et al. (2010). The state of coral reef ecosystems of American Samoa. The State of Coral Reef Ecosystems of the United States and Pacific Freely Associated States: 2008, pp. 307-351.
[38] Kitalong, A. and Dalzell, P. (1994). A preliminary assessment of the status of inshore coral reef fish stocks in Palau. Inshore Fisheries Research Technical Document No. 6. South Pacific Commission.
[39] TNFD (2023). Recommendations of the Taskforce on Nature-related Financial Disclosures. Guidance on engagement with Indigenous Peoples, Local Communities and affected stakeholders.
[40] Convention on Biological Diversity. Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization.
[41] United Nations Declaration on the Rights of Indigenous Peoples (2007). Article 31 on intellectual property.
[42] Johnson, N. et al. (2016). The contributions of community-based monitoring and traditional knowledge to Arctic observing networks: Reflections on the state of the field. Arctic 69(Suppl. 1): 28-40.
[43] TNFD (2023). Recommendations of the Taskforce on Nature-related Financial Disclosures. Governance Disclosure C.
[44] Jetz, W. et al. (2019). Essential biodiversity variables for mapping and monitoring species populations. Nature Ecology & Evolution 3: 539-551.
[45] Deiner, K. et al. (2017). Environmental DNA metabarcoding: Transforming how we survey animal and plant communities. Molecular Ecology 26(21): 5872-5895.
[46] Isaac, N.J.B. et al. (2020). Data integration for large-scale models of species distributions. Trends in Ecology & Evolution 35(1): 56-67.
[47] Kelling, S. et al. (2019). Using semistructured surveys to improve citizen science data for monitoring biodiversity. BioScience 69(3): 170-179.
[48] Eurostat (2020). Experimental statistics at Eurostat -- Trusted smart surveys and big data. Luxembourg: European Commission.
[49] United Nations (2022). Technical Guidance on Biophysical Modelling for SEEA Ecosystem Accounting. Para 380.