...
The above algorithm assumes it is HIGHLY UNLIKELY that the conversions to alpha and beta undo any information loss incurred by the conversion from A to B (proof required).
[Gliffy diagram]
We implement this means of measuring information loss as follows:
- In PolyglotStewardAMQ, create a new method convertWithLoss(...) that carries out the above algorithm
- Modify Polyglot.java to add a method convertWithLoss(...) that calls convert(...) by default
- Create a list of the formats loadable by the comparison tools
- Call the DAP with a conversion request to alpha (ensure this request does not itself trigger information loss estimation, which would recurse)
- Call the DAP with a conversion request to beta (same caveat)
- Add comparison tools to DTS
- Call the DTS with an extraction request for alpha
- Call the DTS with an extraction request for beta
- Add helper methods descriptor_set_distance and descriptor_distance (ported from https://opensource.ncsa.illinois.edu/bitbucket/projects/BD/repos/bdcli/browse/bd.py)
- Use descriptor_set_distance to compare the JSON extracted from alpha and beta; if a match is found, mark the edge as good in the I/O graph (e.g. 1 vs 0)
- Save the result as a MongoDB document of the form: Application, A, B, 0/1
- Add code and a flag to PolyglotStewardAMQ.conf to load edge weights from MongoDB when Polyglot starts
- Add an endpoint that uses the edge weights to determine the best conversion path
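The distance helpers in the steps above might be ported along these lines. The metric shown (normalized Levenshtein distance per descriptor, symmetric average-of-minimums per set) is an assumption for illustration and should be checked against the actual logic in bd.py:

```java
import java.util.List;

// Hypothetical Java port of descriptor_distance / descriptor_set_distance.
// The exact metric in bd.py may differ; this sketch is an assumption.
public class DescriptorDistance {

    // Distance between two individual descriptors in [0, 1]:
    // 0 = identical strings, 1 = maximally different
    // (edit distance normalized by the longer string's length).
    public static double descriptorDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        int maxLen = Math.max(a.length(), b.length());
        return maxLen == 0 ? 0.0 : (double) d[a.length()][b.length()] / maxLen;
    }

    // Distance between two descriptor sets: for each descriptor in one set,
    // find the closest descriptor in the other, then average both directions.
    public static double descriptorSetDistance(List<String> s1, List<String> s2) {
        if (s1.isEmpty() && s2.isEmpty()) return 0.0;
        if (s1.isEmpty() || s2.isEmpty()) return 1.0;
        return (directedDistance(s1, s2) + directedDistance(s2, s1)) / 2.0;
    }

    private static double directedDistance(List<String> from, List<String> to) {
        double total = 0.0;
        for (String a : from) {
            double best = 1.0;
            for (String b : to) best = Math.min(best, descriptorDistance(a, b));
            total += best;
        }
        return total / from.size();
    }
}
```

A set distance of 0 would mean the extractions from alpha and beta matched exactly; the "match found" threshold for marking an edge good still needs to be chosen.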
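The Mongo record described above (Application, A, B, 0/1) could look like the following document. The field names and the sample application are illustrative, not a fixed schema:

```json
{
  "application": "ImageMagick",
  "input": "A",
  "output": "B",
  "quality": 1
}
```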