| Who | Planned - Monday | Accomplished - Friday |
|---|---|---|
| - GLM
- Begin looking at GLENDA data (GLM-10)
|
- GLTG
- IMLCZO
- Add versioning to footer (IMLCZO-126)
|
- Move flux tower parser to production (IMLCZO-128)
|
- GEOD
- Work on React + Alt demo (GEOD-767)
|
| |
| | |
| | |
| | | - Blue Waters workshop
- Brown Dog sprint 1 tasks
- CC* proposals (coming together)
- Revised DIBBs/DataNet position paper and sent back
- Distributed NDSC draft report to TAC
| - Blue Waters workshop
- Brown Dog sprint 1 tasks and followup
- CC* proposals (coming together)
- Revised DIBBs/DataNet position paper and sent back
- Distributed NDSC draft report to TAC
|
| - BD
- Continue working on DataWolf 3.1 issues
- NIST
- Other
- Vacation August 1 - 3, out early Friday August 5th at 2pm
| - BD
- Worked on 3.1 issue - WOLF-164 - allowing users to edit Java tools
- Other
- Caught up on email
- Vacation Aug 1 - 3
|
| | |
| - SEAD - bug fixes
- SEAD-943 - Adding file thumbnails to "Edit Metadata" page in Staging Area (in progress)
- SEAD-1075
- CATS-506
|
- MDF - 2016-08-01 Kickoff
- MWRD - gate, DO, elevation scripts, populate database, e-mail Joe with missing data
| |
| | |
| | |
| | |
| | |
| - BD
- DEBOD
- Continue with debugging / improving cell segmentation
| - BD
- DEBOD
- Completed development on improving fine rotation of documents (a sub task of improving cell segmentation)
|
| - Assisting with new employee onboarding - Bing Zhang
- Brown Dog - 1 week sprint
- Brown Dog Annual Report
- MWRD proposal review - what is the status?
- ISDA PPT
| - Onboarding
- Brown Dog sprint work and annual report
- GLTG Epic Prep - target May 2017
- TAMU team visited 8/2 and 8/3
- ISDA and TV PPT
- Ordered T-shirts
|
| | |
| - GLTG
- GLM
- standardize time returns for get and post
- BD
| - GLTG
- Tennessee data ingested
- GREON aggregated file posting working again after refactoring storage on prod and rsyncing data with dev
- refactored GREON parser to use new pyGeodashboard (bug with greon-06 data)
- GLM
- set up local environment
- make stream date formats consistent
- test datapoint data formats (appear ok)
- BD
- look at bd.m for adding bd-cli usage - lots to do still
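The "make stream date formats consistent" item above amounts to normalizing assorted timestamp strings to one canonical form. A minimal sketch (the input formats here are assumptions for illustration, not the actual GLM stream formats):

```python
from datetime import datetime, timezone

# Candidate input formats — assumptions for illustration, not the
# actual formats stored in the GLM streams.
FORMATS = [
    "%Y-%m-%dT%H:%M:%S%z",   # already ISO 8601 with offset
    "%Y-%m-%d %H:%M:%S",     # space-separated, assumed UTC
    "%m/%d/%Y %H:%M",        # US-style
]

def to_iso8601(raw):
    """Parse a date string in any known format and return ISO 8601 UTC."""
    for fmt in FORMATS:
        try:
            dt = datetime.strptime(raw, fmt)
        except ValueError:
            continue
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # assume UTC when no offset
        return dt.astimezone(timezone.utc).isoformat()
    raise ValueError("unrecognized date format: %r" % raw)

print(to_iso8601("08/01/2016 14:30"))  # 2016-08-01T14:30:00+00:00
```

Storing everything in UTC and converting at display time keeps GET and POST returns consistent regardless of which parser ingested the stream.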
| | | |
| - SEAD (CATS-614)
- GLM (GLM-52)
- New Search Page
CyberSEES | | | - SEAD (CATS-614)
- GLM (GLM-52)
- In geodashboard-search-react, the filters now use the same template instead of different ones. Only one filter shows on load, and up to 3 more can be added with the + button
CyberSEES - Some work on trying to run the GIConverter on WSSI (still failing, but most of the configuration has been done)
|
| Work on Water Network Analysis - create usage pattern dataset from the EPANET input file
| - Vacation
- Made water usage pattern dataset conversion for water network analysis
- Finished node and link file conversion for water network analysis
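Building the usage pattern dataset mostly means reading the [PATTERNS] section of the EPANET .inp file, where a pattern ID may span several lines of multipliers. A minimal sketch (the sample data is invented for illustration):

```python
def parse_epanet_patterns(inp_text):
    """Collect demand-pattern multipliers from the [PATTERNS] section
    of an EPANET .inp file. Pattern IDs may repeat across lines; their
    multipliers are concatenated in order."""
    patterns = {}
    in_section = False
    for line in inp_text.splitlines():
        line = line.split(";")[0].strip()  # drop inline comments
        if not line:
            continue
        if line.startswith("["):
            in_section = line.upper() == "[PATTERNS]"
            continue
        if in_section:
            pid, *values = line.split()
            patterns.setdefault(pid, []).extend(float(v) for v in values)
    return patterns

sample = """\
[PATTERNS]
;ID   Multipliers
 P1   1.0 1.2 1.4
 P1   0.8 0.6
[COORDINATES]
"""
print(parse_epanet_patterns(sample))  # {'P1': [1.0, 1.2, 1.4, 0.8, 0.6]}
```

The node and link conversions mentioned above would follow the same shape, reading the [JUNCTIONS] and [PIPES] sections instead.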
| | | |
| - Texas A&M Visitors (Tuesday)
- Metadata
- Water Network Damage Analysis
| - Texas A&M Metadata Presentation
- Water Network Damage Analysis
|
| | - TERRA (CATS-552)
- meetings w/ standards committee
- evaluate pipeline w/ new Clowder version
- extractor support
| - multi-criteria search initial implementation done
- extractor dev support
- rewrite aspects of Globus pipeline to be more generic for new project sites
- prepare for Danforth pipeline activation - call on Monday
- meetings
|
| | |
| | | - SEAD
- use and check the website
- MSC
| - SEAD
- use and check the website
- MSC
|
| - Package Python code and change dependencies accordingly
- Performance testing on output format (binary or pure text file) and Python library (pandas, pickle and built-in IO)
| - Finished packaging one package; another is waiting on upstream code to complete
- Finished performance testing on output format
|
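The output-format comparison above can be sketched with the standard library alone (pandas is omitted here; timings will vary by dataset and machine, and the data below is a synthetic stand-in):

```python
import os
import pickle
import tempfile
import time

def benchmark(rows):
    """Time writing the same rows as a pickle (binary) and as a
    tab-separated text file; return (seconds, bytes) for each format."""
    results = {}
    with tempfile.TemporaryDirectory() as d:
        pkl = os.path.join(d, "rows.pkl")
        txt = os.path.join(d, "rows.txt")

        start = time.perf_counter()
        with open(pkl, "wb") as f:
            pickle.dump(rows, f)
        results["pickle"] = (time.perf_counter() - start, os.path.getsize(pkl))

        start = time.perf_counter()
        with open(txt, "w") as f:
            for i, v in rows:
                f.write("%d\t%f\n" % (i, v))
        results["text"] = (time.perf_counter() - start, os.path.getsize(txt))
    return results

# Synthetic stand-in data — the real dataset is not part of this report.
for fmt, (secs, size) in benchmark([(i, i * 0.5) for i in range(100_000)]).items():
    print("%-6s %.4f s, %d bytes" % (fmt, secs, size))
```

Binary formats usually win on write time and size for numeric data, but the text file stays greppable and tool-agnostic, which can matter more for downstream consumers.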
Htut Khine Htay Win | | - My highlights of this week are
- I got my LSST project running on my machines in Nebula.
- I got to write code to listen to the status of the machines and store them in Redis.
- My low for this week is
- I got stuck on sending messages with RabbitMQ. It finally worked when I opened a number of security groups while creating machine instances on the server. Still not 100% sure why those matter. Thank you everyone for helping out.
|
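The status listener described above boils down to consuming machine-status messages (e.g. delivered via RabbitMQ) and recording them in Redis hashes. A minimal sketch of the storage step, with a plain dict standing in for the Redis client and an invented message schema (the real field names aren't shown in this report):

```python
import json

# Stand-in for a Redis client: a dict of hashes. With a real client this
# would be redis.Redis().hset(key, mapping=fields).
store = {}

def hset(key, mapping):
    store.setdefault(key, {}).update(mapping)

def on_status_message(body):
    """Handle one machine-status message and record it under
    machine:<hostname> so the latest state is queryable by host."""
    msg = json.loads(body)
    hset("machine:%s" % msg["host"], {
        "state": msg["state"],
        "updated": msg["timestamp"],
    })

on_status_message(
    '{"host": "nebula-01", "state": "up", "timestamp": "2016-08-05T14:00:00Z"}'
)
print(store)  # {'machine:nebula-01': {'state': 'up', 'updated': '2016-08-05T14:00:00Z'}}
```

With RabbitMQ in front, `on_status_message` would be registered as the consumer callback; keeping one hash per machine means a status query never has to replay the message stream.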
| - New Employee Orientation
- Begin learning Brown Dog
| - Installed Polyglot and Clowder in a development environment (Linux), and installed and tested the converters and extractors.
- For the local Clowder instance, I ran the dbpedia extractor and wordcount extractor.
- For Polyglot, I am able to run the existing tools and the tool I developed for the interview.
|