This page should apply to all geodashboard projects.
Parsers built for GLTG
Repositories
The legacy repository for GLTG parsers is at https://opensource.ncsa.illinois.edu/bitbucket/projects/GEOD/repos/gltg-parsers-py/browse. Some of the parser sources have been updated recently; others have not been touched for years.
These should be migrated to https://opensource.ncsa.illinois.edu/bitbucket/projects/GEOD/repos/pygeotemporal-parsers/browse when they are next updated.
Overview of GLTG parsers
- user = parsers (no password)
- root directory = /home/parsers
- directory structure
- 4 directories for 4 systems
- 3 parsers for 3 sources for each system
- run parsers
- each system has a shell script that runs all three sources sequentially
- for each source, all data is parsed first; then a subprocess runs the binning, and the script waits on that subprocess until it finishes. When binning is done, the next source parser starts. There is no timeout between source parsers.
- cronjobs
- get greon data from gltg.ncsa.illinois.edu:/var/opt/CampbellSci/loggernet_ordered
- runs as the marcuss user (this can be changed, but for now it is required by the permissions on loggernet on gltg)
/home/marcuss/get_greon_data.sh
uses rsync to pull data to /home/marcuss/data/greon/, then copies it to /home/parsers/greon-data/
- parsers
- 4 lines for 4 systems
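The per-system run flow above (parse each source in full, then block on the binning subprocess before starting the next source) can be sketched in Python. This is a minimal sketch; the parser and binning script names are hypothetical, not the actual script names on the host:

```python
import subprocess

# Hypothetical (parser command, binning command) pairs for one system's
# three sources; the real hosts use shell scripts with the same structure.
SOURCES = [
    (["python", "parse_greon.py"], ["python", "bin_greon.py"]),
    (["python", "parse_iwqis.py"], ["python", "bin_iwqis.py"]),
    (["python", "parse_usgs.py"],  ["python", "bin_usgs.py"]),
]

def run_system(sources):
    """For each source: parse all data first, then start the binning
    subprocess and wait until it finishes before moving on to the next
    source. No timeout between source parsers."""
    binning_codes = []
    for parse_cmd, bin_cmd in sources:
        subprocess.run(parse_cmd, check=True)   # parse all data first
        binning = subprocess.Popen(bin_cmd)     # then run binning in a subprocess
        binning_codes.append(binning.wait())    # block until binning finishes
    return binning_codes
```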
General processes that take significant time
- updating sensor statistics - required at the end of parsing to update sensor start and end times: https://opensource.ncsa.illinois.edu/bitbucket/projects/GEOD/repos/pygeotemporal/browse/pygeotemporal/sensors.py#239
- it may do more than that; not verified
- the query could possibly be simplified
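At its core, the statistics update recomputes each sensor's start and end times from its datapoints. The sketch below illustrates that computation over in-memory datapoints with an assumed shape (dicts with sensor_id, start_time, end_time); the real implementation in pygeotemporal/sensors.py runs against the database and may do more:

```python
def sensor_time_bounds(datapoints):
    """Illustrative only: compute each sensor's earliest start_time and
    latest end_time, which is the part of the statistics update that the
    parsers depend on. Assumes ISO-8601 time strings, which compare
    correctly as plain strings."""
    bounds = {}
    for dp in datapoints:
        sid = dp["sensor_id"]
        start, end = bounds.get(sid, (dp["start_time"], dp["end_time"]))
        bounds[sid] = (min(start, dp["start_time"]), max(end, dp["end_time"]))
    return bounds
```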
System Parsing Times
Here each source parses its data, then waits for the binning subprocess to finish. When the binning finishes, the next source parser starts. There is no timeout between parsers.
- gltg-dev
- resources
- nebula, proxy (2 CPU, 4 GB RAM)
- nebula, postgres (4 CPU, 8 GB RAM)
- times by source
- greon 0.5h
- iwqis 1h
- usgs 1h 24m, failed, presumably due to an HTTP timeout
- gltg
- resources
- SD stack, proxy (4 CPU, 6 GB RAM)
- nebula, postgres (4 CPU, 8 GB RAM)
- times by source
- greon 6m
- iwqis 40m
- usgs 3h 10m
- however, Geostreams was unresponsive for a long time
- ilnlrs-dev
- major problem - authentication of cache client timing out
- added a retry with a 10-minute wait after a timeout, then another retry with a 20-minute wait - it still timed out and crashed the parser
- this was with the usgs parser only
- checked ilnlrs-geodashboard-dev - high CPU usage, up to 198%, long after the parser crashed
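The retry workaround described above (wait 10 minutes after a timeout, try again, then wait 20 minutes and try once more) is a simple fixed backoff. A minimal sketch, where `authenticate` stands in for the hypothetical cache-client authentication call that raises on timeout:

```python
import time

def authenticate_with_retries(authenticate, waits=(600, 1200), sleep=time.sleep):
    """Retry cache-client authentication after timeouts: wait 10 minutes
    and retry, then wait 20 minutes and retry once more. If the final
    attempt also times out, the exception propagates (which is what
    crashed the usgs parser). `sleep` is injectable for testing."""
    for wait in waits:
        try:
            return authenticate()
        except TimeoutError:
            sleep(wait)  # wait, then try again
    return authenticate()  # final attempt; a timeout here is fatal
```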
GLM Parsing Time
GLM Zooplankton/Phytoplankton ingestion timing for Production server (141.142.211.239):
- Update Statistics: About an hour for all sensors. (api/sensors/update/).
- Binning by season
- Ran the following endpoint for all 3 parameters simultaneously:
- /api/cache/season/parameter-name
- Took about 9 hours for all to complete.
- Number of Datapoints:
- Zooplankton-biomass: 3004
- Zooplankton-biovolume: 3004
- Phytoplankton-biovolume: 1930
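Running the season-binning endpoint for all three parameters simultaneously, as described above, can be sketched with a thread pool. The parameter slugs and URL shape are taken from this page but are assumptions about the exact API; `fetch` is injectable so the sketch is not tied to the production server:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Parameter slugs as listed above; exact casing in the API is an assumption.
PARAMETERS = ["zooplankton-biomass", "zooplankton-biovolume", "phytoplankton-biovolume"]

def bin_by_season(base_url, parameters=PARAMETERS, fetch=None):
    """Fire /api/cache/season/<parameter> for every parameter at the same
    time and wait for all of them. By default each request is a plain HTTP
    GET; pass a custom `fetch` callable to stub it out."""
    if fetch is None:
        fetch = lambda url: urlopen(url).read()
    urls = [f"{base_url}/api/cache/season/{p}" for p in parameters]
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        return list(pool.map(fetch, urls))
```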