Configuring data fetchers
Overview
DSAPI2 provides a framework for fetching, parsing, and tokenizing external data sources to produce streams.
An external fetcher is a standalone Java application intended to be invoked periodically by a server, e.g., as a cron job. There is a single entry point, and a properties-based configuration determines which classes are used for fetching, parsing, and filtering stream tokens. This is a simple form of dependency injection of the sort that is often done with more complex frameworks such as Spring.
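The properties-driven wiring described above can be sketched with plain reflection. This is a minimal illustration, not DSAPI2's actual code: the `Fetcher` interface, `DummyFetcher`, and `FetcherLoader` names here are hypothetical stand-ins.

```java
import java.util.Properties;

// Hypothetical fetcher contract; DSAPI2's real interface will differ.
interface Fetcher {
    String fetch();
}

// Trivial implementation used only to demonstrate the wiring.
class DummyFetcher implements Fetcher {
    public String fetch() { return "data"; }
}

public class FetcherLoader {
    // Instantiate whatever class the configuration names, via reflection.
    // This is the simple form of dependency injection the text describes;
    // frameworks like Spring generalize the same idea.
    static Fetcher load(Properties config) throws Exception {
        String className = config.getProperty("fetcher.class");
        return (Fetcher) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
    }

    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.setProperty("fetcher.class", "DummyFetcher");
        Fetcher f = load(config);
        System.out.println(f.fetch());
    }
}
```

Because the concrete class is chosen at runtime from the properties file, swapping fetcher implementations requires no code changes, only a configuration edit.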
Basic configuration
Several properties are generic and apply to all fetcher and parser implementations. These are:
fetcher.class - the fully-qualified name of the class that will be used to fetch data
fetcher.realtime - if true, only the most recent token from each execution will be written to the stream; if false, all tokens produced from each execution will be written to the stream
fetcher.delay - how long to wait between executions (in milliseconds)
parser.class - the fully-qualified name of the class that will be used to parse data (some parser implementations may ignore the fetcher and perform the fetching themselves)
date.extractor.class - the fully-qualified name of the class that will be used to extract dates (some parsers may ignore the date extractor and perform timestamping themselves)
stream.assigner.class - the fully-qualified name of the class that will determine which stream tokens will be written to (some parsers may ignore the stream assigner)
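Taken together, a minimal configuration using only the generic properties might look like the following. The class names here are illustrative placeholders, not real DSAPI2 classes; substitute the fully-qualified names of your own implementations.

```
# hypothetical implementation classes, for illustration only
fetcher.class = com.example.fetch.MyFetcher
# write every token from each execution, not just the most recent
fetcher.realtime = false
# wait one minute (60000 ms) between executions
fetcher.delay = 60000
parser.class = com.example.parse.MyParser
date.extractor.class = com.example.date.MyDateExtractor
stream.assigner.class = com.example.stream.MyStreamAssigner
```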
Example configuration: Twitter
DSAPI2 includes a Twitter parser. It ignores its fetcher, because Twitter-specific parameters determine which URL needs to be fetched. The following example is annotated to describe each parameter.
# the fetcher is ignored, but since we must instantiate something, an HTMLFetcher is used
fetcher.class = edu.uiuc.ncsa.datastream.util.fetch.fetcher.HTMLFetcher
# non-realtime, because we're performing a Twitter search
fetcher.realtime = false
# wait 5 minutes between fetches (Twitter is rate-limited)
fetcher.delay = 300000
parser.class = edu.uiuc.ncsa.datastream.util.fetch.dataparser.TwitterParser
# the following four parameters are OAuth authentication parameters
# the example values are not valid; do not attempt to authenticate with them
parser.twitter.key = qeS5HHN1s69Xrz2SqtJISQ
parser.twitter.secret = sXcEHIlzMqDSsfRrUNe8D4bGOObxsqidmknpmBn8I
parser.twitter.token = 61353510-9TUfOSHMdQWSklTzpV23kCqrnK23ev2WdFzlvNP1F
parser.twitter.tokenSecret = nHyQR6Mi5zZvgptZOPDr0JjqGnoASbvyW5wAa5bKBE
# the next few parameters specify a query against Twitter's query API
# for documentation on query syntax see http://search.twitter.com/api/
# this is the query itself. "car" means search for tweets containing the word "car"
parser.twitter.query = car
# here we specify a geographic centroid
parser.twitter.lat = 40.116349
parser.twitter.lon = -88.239183
# and a radius
parser.twitter.radius = 30
# in miles. this is the geographic region in which to search
parser.twitter.distanceUnits = miles
# the date extractor is ignored; the twitter4j API performs date extraction for us
date.extractor.class = edu.uiuc.ncsa.datastream.util.fetch.dateparser.SimpleDateExtractor
# here we're putting all search results into a single stream
stream.assigner.class = edu.uiuc.ncsa.datastream.util.fetch.ConstantStreamAssigner
# the URI of the stream. this can be any valid URI
stream.assigner.constant.stream = urn:streams/snorb7/twitter
# TypeRegisterFilter allows us to associate a content type with token data
onetimefilter.class = edu.uiuc.ncsa.datastream.util.fetch.filter.TypeRegisterFilter
# in this case tweets are of type text/plain
filter.typeregister.mime = text/plain
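Since the configuration is a standard Java properties file, it can be sanity-checked with `java.util.Properties` before deploying, for example to confirm that `fetcher.delay` (given in milliseconds) works out to the intended interval. A small sketch, using an inline config for brevity:

```java
import java.io.StringReader;
import java.util.Properties;

// Quick sanity check of a fetcher configuration using the standard
// java.util.Properties loader (the same key = value format shown above).
public class ConfigCheck {
    public static void main(String[] args) throws Exception {
        String config = String.join("\n",
            "fetcher.delay = 300000",
            "parser.twitter.query = car",
            "parser.twitter.radius = 30");
        Properties p = new Properties();
        p.load(new StringReader(config));
        // fetcher.delay is in milliseconds: 300000 ms / 60000 = 5 minutes
        long delayMinutes = Long.parseLong(p.getProperty("fetcher.delay")) / 60000;
        System.out.println("delay = " + delayMinutes + " minutes");
        System.out.println("query = " + p.getProperty("parser.twitter.query"));
    }
}
```

In production you would load the file itself (e.g. `p.load(new FileReader("fetcher.properties"))`) rather than an inline string.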