posted Nov 4, 2010, 3:19 AM by Hervé Caumont
updated Nov 19, 2010, 8:01 AM
- Some discussions in preparation for the AIP-3 presentation to the CEN TC/287 Workshop
- Review of the WG Action items, plus recent contribution from Aston University
- Interest raised in further addressing resolution and scale issues
Dan Cornford – Aston University
Will Pozzi – Center for Research in Environment and Water
Hervé Caumont – OGC / ERDAS
- Review of the ongoing action items
- July 8th : create a category in GEOSS BPW to host current discussion items on ontologies in support of Data Harmonization tasks
- Sept. 9th : ‘mapping’ relevant encoding standards onto actual QA4EO best practices. Could be done within SBAs, seen as domains having specific needs. AIP could look for the ‘common ground’ in terms of encoding standards
- Sept. 9th : coordinate with the QA4EO team to look for case studies from AIP, underpinning that some of the current AIP work already matches a given QA4EO BP or Procedure
- Sept. 10th : Find a date to convene the next joint AIP-3 / DA-09-01b telecon
- Sept. 10th : Update the AIP-3 DH WG pages with GIGAS recommendations
- Sept. 10th : Further discuss with GIGAS contributors their potential follow-on in Q4 2010 (Clemens and Andrew indicated progress on the GIGAS paper)
- Sept. 10th : Illustrate possible usages of O&M to ‘cross-link’ a SOS and a WCS that a client application can understand and follow
- Sept. 10th : Update the GEOSS Best Practices wiki from the current « QA4EO implementation for Capacity Building » point of view to a more AIP-oriented, encoding-standards-focused one, aimed at the GEOSS/GCI user chain (service providers, application developers, end-users)
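The O&M cross-linking action above could be illustrated along these lines: an O&M Observation returned by a SOS carries its result as an xlink reference to a WCS GetCoverage request, so a client can follow the link to the gridded data. A minimal sketch; the endpoint URL, sensor URN, and coverage identifier are hypothetical placeholders, not values from the group's work.

```python
# Sketch: an O&M Observation whose om:result is an xlink reference to a
# WCS GetCoverage request, letting a SOS client follow the link to a WCS.
# All URLs and identifiers below are hypothetical placeholders.
import xml.etree.ElementTree as ET

OM = "http://www.opengis.net/om/1.0"
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("om", OM)
ET.register_namespace("xlink", XLINK)

# Hypothetical WCS endpoint and coverage identifier
wcs_url = ("http://example.org/wcs?service=WCS&version=1.1.0"
           "&request=GetCoverage&identifier=SOIL_MOISTURE_GRID")

obs = ET.Element(f"{{{OM}}}Observation")
proc = ET.SubElement(obs, f"{{{OM}}}procedure")
proc.set(f"{{{XLINK}}}href", "urn:example:sensor:soil-moisture")  # hypothetical
result = ET.SubElement(obs, f"{{{OM}}}result")
result.set(f"{{{XLINK}}}href", wcs_url)  # the client follows this link to the WCS

print(ET.tostring(obs, encoding="unicode"))
```

The point of the sketch is only the linking pattern: the SOS response stays lightweight, and the heavyweight coverage retrieval is delegated to the WCS via a resolvable reference.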
- Quality assurance for Data Harmonization
- Brad's video presents the general "GCI super-structure" issue well
- QA4EO recommendations are quite "metrology-based"... there is a need for "in-field" qualification. Is there a document from QA4EO on this approach? Cf. topological consistency... QA4EO is often based on the concept of repeatable events and lab-based studies. Extending QA4EO to, e.g., non-repeatable measurements appears to be quite a difficult problem. There is some discussion of Bayesian approaches to post-launch quality assessment, but these might require some updating.
- Also need to describe data product levels well:
- Level 1 / Primary dissemination imagery formats: data products are generally input to other production tasks. Quality and usability criteria must support quantitative assessment; QA4EO gives an emphasis here, but it must be borne in mind that pre-launch quality assessment must be validated by rigorous post-launch in-field assessment.
- Uncertainty in the models (as intermediate processing) - cf. the GEO Model Web task, the UncertWeb project...
- Level 2+ / Secondary dissemination imagery formats: often end-user products; quality and usability criteria can be slightly different, notably allowing qualitative assessment. Check NetMar, UncertWeb, GeoViQua...
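The Bayesian idea mentioned in the discussion above can be made concrete with a toy example: a pre-launch (lab-based) characterisation of a sensor's calibration bias serves as a prior, and post-launch in-field match-ups update it. A minimal conjugate normal-normal sketch; all numbers are illustrative, not actual QA4EO values.

```python
# Sketch: update a pre-launch estimate of a sensor's calibration bias
# with post-launch in-field match-ups (conjugate normal-normal model).
# All numbers are illustrative.

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance for a normal prior and normal likelihood."""
    post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(obs) / obs_var)
    return post_mean, post_var

# Pre-launch lab characterisation: bias ~ N(0.0, 0.5**2)
prior_mean, prior_var = 0.0, 0.25
# Post-launch field match-ups (sensor minus in-situ reference), noise var 0.1
field_diffs = [0.32, 0.28, 0.41, 0.35]
mean, var = normal_update(prior_mean, prior_var, field_diffs, 0.1)
print(f"posterior bias: {mean:.3f} +/- {var**0.5:.3f}")
```

The sketch shows why in-field data matters: a handful of match-ups already shifts the bias estimate away from the lab prior and shrinks its variance, which is the "rigorous post-launch assessment" point made for Level 1 products.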
- Markup Languages and Ontologies for Data Harmonization
- The CUAHSI Controlled Vocabulary is very water-quality oriented. There are not many anchors for Earth observations... How to extend such an ontology, e.g. for drought?
- Global drought monitoring: scales, different levels... soil moisture, for example. Do we see this scale concept in today's ontologies? It is still very vague, and indeed a new subject and research domain. It also depends on the intended use of the product as presented to end-users. Concrete example from agricultural models: field plots in China are managed at 'meter' resolution, whereas farms in the US or Russia are rather managed at '250 m' resolution
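One way to act on the scale discussion above is to make resolution an explicit, machine-comparable property of a product description, so a client can filter products by the scale of the management units it serves. A minimal sketch; the class, field names, and product labels are hypothetical, with the resolution figures taken from the agricultural example above.

```python
# Sketch: spatial resolution as an explicit, comparable product property,
# so clients can match products to the scale of their management units.
# Class and product names are hypothetical; resolutions come from the
# agricultural example (meter-scale plots vs 250 m farms).
from dataclasses import dataclass

@dataclass
class ProductScale:
    name: str
    resolution_m: float  # ground sample distance in metres

products = [
    ProductScale("field-plot soil moisture (China)", 1.0),
    ProductScale("farm-scale soil moisture (US/Russia)", 250.0),
]

def usable_at(required_m: float, products):
    """Return products whose resolution is at least as fine as required."""
    return [p for p in products if p.resolution_m <= required_m]

print([p.name for p in usable_at(250.0, products)])
```

A meter-resolution requirement keeps only the field-plot product, while a 250 m requirement admits both; an ontology would express the same constraint as a property with a unit rather than free text.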
- Registers and GCI resources supporting Data Harmonization
- Judgement of the user: indeed the term 'computing trust' can lead to a view of 'automated decisions', but we agree that a human in the loop is foreseen
- Need to address some GMES-related examples. Discussion on statistical data for modelling dry deposition rates of atmospheric acidifying components into forest ecosystems...