This is a description of the extractors and converters that we plan to implement next at the UMD iSchool. These reflect our priorities around curating mostly born-digital archival collections.
| action | input | output format / fields | use case | notes |
|---|---|---|---|---|
| Extract | XML | schema, schema type (DTD/XSD), well-formed?, schema retrievable?, valid? | Meaningful search over XML files in the archives will hinge on the schema employed. By extracting the schema we can index it. This is analogous to file characterization within the XML world. | |
| Extract | KML | geospatial bounding box | Allows geospatial search and discovery of relevant archives. | |
| Extract | SHP/SHX | geospatial bounding box | Allows geospatial search and discovery of relevant archives. | |
| Extract | HTML | title, hyperlinks (href, text) | Allows us to create an index of all of the web pages in a web archive of a site for a federal agency, etc. The text of links can be used to describe the page referenced, becoming additional keywords. | Lets us try PageRank scoring in archives. |
| Extract | | content-based creation date, dates in content | Files that have moved through archival deposit workflows, or from computer to computer prior to deposit, often no longer carry good metadata on a document's creation date. The algorithm would produce a best guess at a creation date based on the various dates used in the text. | |
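As a rough sketch of the XML characterization extractor, the following stdlib-only function checks well-formedness and pulls out whatever schema references a file declares (a DTD system identifier or `xsi:schemaLocation` pairs). The field names are illustrative, not a committed output format, and it does not attempt validation or schema retrieval.

```python
import re
import xml.etree.ElementTree as ET

def characterize_xml(text):
    """Return schema-related facts about one XML document (sketch only)."""
    info = {"well_formed": False, "root": None,
            "dtd_system_id": None, "xsd_locations": []}
    # Well-formedness: does the document parse at all?
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return info
    info["well_formed"] = True
    info["root"] = root.tag
    # DTD reference: ElementTree drops the DOCTYPE, so scan the raw text.
    m = re.search(r'<!DOCTYPE\s+\S+\s+SYSTEM\s+"([^"]+)"', text)
    if m:
        info["dtd_system_id"] = m.group(1)
    # XSD reference: xsi:schemaLocation holds namespace/URL pairs.
    xsi = "{http://www.w3.org/2001/XMLSchema-instance}schemaLocation"
    loc = root.get(xsi)
    if loc:
        info["xsd_locations"] = loc.split()
    return info
```

A real implementation would also want to resolve and fetch the schema ("schema retrievable?") and validate against it, which the stdlib alone does not do.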
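For the creation-date guesser, one simple heuristic (an assumption on our part, not a settled algorithm) is that a document is usually written at or shortly after the most recent date it mentions, so the latest plausible year found in the text is a reasonable first guess:

```python
import re

def guess_creation_year(text, min_year=1800, max_year=2100):
    """Guess a document's creation year from four-digit years in its text.

    Heuristic sketch: return the latest plausible year mentioned, on the
    assumption that a document postdates the dates it cites.
    """
    years = [int(y) for y in re.findall(r"\b(1[89]\d\d|20\d\d)\b", text)]
    years = [y for y in years if min_year <= y <= max_year]
    return max(years) if years else None
```

A production version would likely weight full dates over bare years and cross-check against filesystem timestamps where they survive.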
Proposing that UMD build some of the following extractors; let me know what you think.
- XML: is it well-formed? what is the schema/DTD? is it valid?
- KML: basic geographic information (bounding box?) and embedded metadata fields
- SHP/SHX (QGIS): see above
- HTML: page title, link text, and hrefs
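For the KML bounding box, KML stores points as comma-separated `lon,lat[,alt]` tuples inside `<coordinates>` elements, so a bounding box is just the min/max over every coordinate in the file. A stdlib-only sketch (SHP/SHX would need a shapefile reader instead, since that format is binary):

```python
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_bounding_box(kml_text):
    """Return (west, south, east, north) over all <coordinates> elements,
    or None if the document contains no coordinates."""
    lons, lats = [], []
    root = ET.fromstring(kml_text)
    for el in root.iter(KML_NS + "coordinates"):
        if not el.text:
            continue
        for token in el.text.split():
            parts = token.split(",")
            lons.append(float(parts[0]))
            lats.append(float(parts[1]))
    if not lons:
        return None
    return (min(lons), min(lats), max(lons), max(lats))
```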
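The HTML extractor (page title plus href/link-text pairs for indexing and PageRank-style scoring) can be sketched with the stdlib's `html.parser`; a hardened version would probably use a tolerant parser like BeautifulSoup, since archived pages are often malformed:

```python
from html.parser import HTMLParser

class PageIndexer(HTMLParser):
    """Collect the page <title> and every <a href> with its link text."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.links = []        # list of (href, link text) pairs
        self._in_title = False
        self._href = None      # href of the <a> currently open, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data
        if self._href is not None:
            self._text.append(data)
```

The collected link text gives the extra keywords for the referenced page, and the href pairs give the edge list a PageRank experiment would need.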