Who | Planned - Monday | Accomplished - Friday |
---|---|---|
| | |
| - Brown Dog - Clusterman Sprint!
- NDS
- 1.8 Deployments
- Brown Dog pd.py application
|
|
Chen Wang | - IN-Core
  - Getting familiar with IN-Core
- Clowder
  - Setting up a meeting to figure out further implementation of SMM with Clowder
- SMM
| - IN-Core
  - Played with v1
  - Added a small function to the v2 fragility-service (POST /api/fragilities)
- Clowder & SMM
  - Meeting and logistics
  - Other small tasks assigned by SMM people
|
| - Cover Crop
  - Code review
  - Finalize what's needed for the new demo video
- Ergo
- IN-Core
  - Start next sprint
  - Work on DataWolf storage refactoring
  - Add incore-dev build to Jenkins so we can deploy from the dev branch
  - Help with onboarding of the new hire
- General
  - Release DataWolf 4.2
| |
| | |
| | |
| - GLM
  - Run Glenda parser in dev
  - Address bugs in detail page
- IN-Core
  - Refactor building portfolio analyses
  - Update the front end for v2 to match the API refactoring
  - Start looking into new analyses
| - GLM
  - Ran Glenda parser in dev
  - Addressed bugs in detail page
- IN-Core
  - Refactored building portfolio analyses
  - Updated the front end for v2 to match the API refactoring (some work)
  - Started looking into new analyses
- SMU
|
| - Add new logic to handle NA values for GeneSet_Characterization_Pipeline
- Create new Docker image for Data_Cleanup_Pipeline
- Add new function to Signature_Analysis_Pipeline to create a new output file indicating the best match for each gene
- Search for solutions for user data/file backup for JupyterHub
| - Added new logic to handle NA values for GeneSet_Characterization_Pipeline
- Created new Docker image for Data_Cleanup_Pipeline
- Added new function to Signature_Analysis_Pipeline to create a new output file indicating the best match for each gene (partially done)
- Searched for solutions for user data/file backup for JupyterHub; still in discussion
|
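The actual NA-handling logic for GeneSet_Characterization_Pipeline isn't described in these notes; as a purely hypothetical pandas sketch (the function name, drop threshold, and fill value are illustrative, not taken from the pipeline), it might look like:

```python
import numpy as np
import pandas as pd

def clean_na(df: pd.DataFrame, fill_value: float = 0.0,
             drop_threshold: float = 0.5) -> pd.DataFrame:
    """Hypothetical NA policy: drop genes (rows) that are mostly NA,
    then fill the remaining gaps with a neutral value."""
    # Keep rows where at least `drop_threshold` of the samples are present
    min_present = int(np.ceil(drop_threshold * df.shape[1]))
    kept = df.dropna(thresh=min_present)
    # Replace any remaining NAs with the fill value
    return kept.fillna(fill_value)
```

This is only a sketch of a common spreadsheet-cleanup pattern; the real pipeline may drop columns, impute, or reject the file instead.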
| | |
| | |
| - Clowder paper
- Meetings for potential collaborations
- Clowder refactoring
| - Clowder paper is going very slowly. Made an outline and assigned contributors; still need to write.
- Meetings are going well, just too many of them.
- Clowder refactoring happened in one meeting. Too little time for this; that's a problem.
|
| - GLTG
  - Upgrade GLTG server (scheduled Wednesday)
- VBD
| - GLTG
- VBD
  - Fortran bug fixes
  - Set up Clowder instance
|
| - Complete first draft of PEARC18 paper
- EOH prep finalization
- meantemp & canopycover traits processing, starting Tuesday
- Stability planning
| - PEARC18 paper 70% complete; plan to finish during Monday's Nebula outage
- EOH prep
- BETYdb meantemp traits loaded
- ECSS allocation planning
|
| | - MDF
  - Read the Docs account, make file improvements
- Farmdoc
  - Set up VM, installed DataWolf (4.1.0)
- Faculty Fellowship
  - No meeting (EOH)
  - Vetting students for REU
|
| - NDS
  - Plan for Workbench Beta redeploy; e-mail beta users
  - Fix/audit leftover deploy-tools bugs
- KnowEnG
  - Continue trying to submit jobs to the Kubernetes API
- Crops in Silico
  - Wire up the last few missing fields to produce real "models" YAML for running in cisrun
| - NDS
  - Sent out maintenance e-mail to beta users
- KnowEnG
  - Successfully submitting jobs; the dependency job finishes, but the main job is still failing
- Crops in Silico
  - Wired up the last few missing fields (driver / args)
|
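For context, submitting a job through the Kubernetes API means creating a `batch/v1` Job object. A minimal manifest-builder sketch follows; the name, image, and command are placeholders, not the actual KnowEnG job spec, which these notes don't show:

```python
def make_job_manifest(name: str, image: str, command: list) -> dict:
    """Build a minimal Kubernetes batch/v1 Job manifest as a plain dict.

    Hypothetical sketch: the real KnowEnG job (image, command, and the
    dependency-job wiring mentioned above) is not described here.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 0,  # fail fast instead of retrying
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        {"name": name, "image": image, "command": command}
                    ],
                }
            },
        },
    }
```

A dict like this could be POSTed to `/apis/batch/v1/namespaces/{ns}/jobs`, or passed as `body` to `kubernetes.client.BatchV1Api().create_namespaced_job(...)` from the official Python client.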
| - GLTG
- IMLCZO
  - Priority:
    - Re-run parser for Flux Tower
    - Re-run parser for Allerton non-Decagon
  - If time:
| - GLTG
- IMLCZO
  - Re-ran parser for Flux Tower
  - Re-ran parser for Allerton non-Decagon
|
| | |
| - KnowEnG
  - Update Terms of Use
  - HUBzero portal UI update
- Syngenta
| - KnowEnG
  - Updated Terms of Use
  - HUBzero portal UI update
- Syngenta
|
| | |
| | |
| - New hire
- HR duties
- Reports
- Brown Dog tasks
- EOH
- More to come
| |
| | - BD
  - Add choose tool for Clusterman
- CC
- GLM
  - Add USGS to dev machine
  - Add bearer token to geostreams-api
|
| - Work on pycsw insert automation
- Create WebDAV metadata import Python script
- Check out flooding Fortran code if it is delivered
- Create incore2-notebook VM
| - Worked on WebDAV data dump Python script based on the new API change
- Worked on WebDAV metadata dump Python script
- Refactored the data repository's GUID creation method to speed up the process
- Refactored the table and shapefile join method to speed up the process
- Tested pycsw keyword search
|
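The WebDAV dump scripts themselves aren't shown in these notes; as a hypothetical sketch of one step, extracting the resource list from a WebDAV PROPFIND multistatus response (RFC 4918) could look like:

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # WebDAV elements live in the "DAV:" XML namespace

def list_propfind_hrefs(multistatus_xml: str) -> list:
    """Extract resource hrefs from a WebDAV PROPFIND multistatus body.

    Hypothetical sketch of one step of a metadata-dump script; the real
    script, server, and endpoints are not described above.
    """
    root = ET.fromstring(multistatus_xml)
    # Each <D:response> carries one <D:href> identifying a resource
    return [href.text for href in root.iter(DAV + "href")]
```

Fetching the body itself typically uses an HTTP `PROPFIND` request, e.g. `requests.request("PROPFIND", url, headers={"Depth": "1"}, auth=...)`.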