1. Data (/shared/ohsumed/data) 

#download Ohsumed data 

mkdir -p /shared/ohsumed 
cd /shared/ohsumed
wget http://trec.nist.gov/data/filtering/t9.filtering.tar.gz

#untar Ohsumed data 

tar xvzf t9.filtering.tar.gz --owner root --group root --no-same-owner >> ohsumeddata.log 2>&1

 #copy data into /shared/ohsumed/data 

mkdir -p /shared/ohsumed/data
cp /shared/ohsumed/ohsu-trec/trec9-test/ohsumed.88-91 /shared/ohsumed/data
cp /shared/ohsumed/ohsu-trec/trec9-train/ohsumed.87 /shared/ohsumed/data

#convert data into trec format 

cd ~/biocaddie/scripts   
./ohsumed2trec.sh 

# Number of documents: 348,566 

Output: /shared/ohsumed/data/trecText/ohsumed_all.txt 

Also make a copy at /data/ohsumed/data/ 
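As a sanity check against the 348,566-document count above, one can count the `<DOC>` records in the generated trecText file. A minimal Python sketch; the inline sample stands in for the real /shared/ohsumed/data/trecText/ohsumed_all.txt:

```python
# Hypothetical sanity check: count <DOC> records in a trecText file.
# The tiny sample below stands in for the full ohsumed_all.txt file,
# which should yield 348566.
sample = """<DOC>
<DOCNO>87049087</DOCNO>
<TEXT>Some title and abstract text.</TEXT>
</DOC>
<DOC>
<DOCNO>87049088</DOCNO>
<TEXT>Another abstract.</TEXT>
</DOC>
"""

def count_docs(trectext: str) -> int:
    """Count documents by their opening <DOC> tags."""
    return sum(1 for line in trectext.splitlines() if line.strip() == "<DOC>")

print(count_docs(sample))  # 2 for this sample
```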
 

2. Indexes (/shared/ohsumed/indexes/ohsumed_all) 

Index param file: ~/biocaddie/index/build_index.ohsumed.params 

#Content 

<parameters> 
  <index>/shared/ohsumed/indexes/ohsumed_all</index> 
  <indexType>indri</indexType> 
  <corpus> 
    <path>/shared/ohsumed/data/trecText/ohsumed_all.txt</path> 
    <class>trectext</class> 
  </corpus> 
</parameters> 

#Build index 

mkdir -p /shared/ohsumed/indexes/  
cd ~/biocaddie  
IndriBuildIndex index/build_index.ohsumed.params  

Output is saved at /shared/ohsumed/indexes/ohsumed_all 

Also make a copy at /data/ohsumed/indexes/ohsumed_all 


3. Queries 

#copy topics to /shared/ohsumed/queries folder 

mkdir -p /shared/ohsumed/queries
cp /shared/ohsumed/ohsu-trec/trec9-train/query.ohsu.1-63 /shared/ohsumed/queries 
cp /shared/ohsumed/ohsu-trec/pre-test/query.ohsu.test.1-43 /shared/ohsumed/queries 

There are two query files: one for the pre-test set and one for the training (and test) sets.

We create queries.combined.orig (106 queries in total), which includes all of the queries, and queries.combined.short (63 queries), which contains only the training- and test-set queries (pre-test queries excluded).

#convert queries into trec format (use ohsumedtopics2trec.sh to create queries.combined.orig and ohsumedtopics2trec_v2.sh to create queries.combined.short)

cd ~/biocaddie  
scripts/ohsumedtopics2trec.sh

Output is saved at /shared/ohsumed/queries

Also make a copy of the query at /data/ohsumed/queries 
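A hedged sketch of what the conversion script does: assuming the query files use the standard TREC topic markup (`<top>`, `<num>`, `<desc>`), extract each description and wrap it in Indri's query-parameter XML. The exact field layout and cleanup rules here are illustrative assumptions, not the script's actual code:

```python
import re

# Sample topic in the assumed TREC-style OHSUMED format.
sample_topics = """<top>
<num> Number: OHSU1
<title> 60 year old menopausal woman without hormone replacement therapy
<desc> Description:
Are there adverse effects on lipids when progesterone is given with estrogen replacement therapy
</top>
"""

def topics_to_indri(text: str) -> str:
    """Convert TREC-style topics into an Indri query-parameter file."""
    queries = []
    for block in re.findall(r"<top>(.*?)</top>", text, re.S):
        num = re.search(r"<num>\s*Number:\s*(\S+)", block).group(1)
        desc = block.split("Description:", 1)[1].strip()
        # Indri's query parser chokes on punctuation, so keep only
        # word characters (illustrative cleanup rule).
        desc = re.sub(r"[^\w\s]", " ", desc)
        queries.append(f"<query><number>{num}</number><text>{desc}</text></query>")
    return "<parameters>\n" + "\n".join(queries) + "\n</parameters>"

print(topics_to_indri(sample_topics))
```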


4. Qrels

#copy qrels to /shared/ohsumed/qrels folder 

mkdir -p /shared/ohsumed/qrels
cp /shared/ohsumed/ohsu-trec/trec9-train/qrels.ohsu.batch.87 /shared/ohsumed/qrels 
cp /shared/ohsumed/ohsu-trec/pre-test/qrels.ohsu.test.87 /shared/ohsumed/qrels 
cp /shared/ohsumed/ohsu-trec/trec9-test/qrels.ohsu.88-91 /shared/ohsumed/qrels 

Similar to the queries, the three qrels files contain relevance judgments for the pre-test, training, and test sets.

For queries.combined.orig, all qrels files are used - qrels.all

For queries.combined.short, only the qrels for the training and test sets are used (qrels.ohsu.batch.87 and qrels.ohsu.88-91) - qrels.notest

However, the downloaded qrels files are missing a column that trec_eval requires, so we must add it before use.

#convert qrels into the format trec_eval expects (insert 0 as the second column) 

cat qrels.ohsu.* | sed 's/\t/\t0\t/1' > qrels.all 
cat qrels.ohsu.batch.87 qrels.ohsu.88-91 | sed 's/\t/\t0\t/1' > qrels.notest 
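The same column fix in Python, mirroring the sed one-liner: trec_eval expects a four-field qrels line (`topic iteration docid relevance`), so an unused iteration field of 0 is inserted after the first tab.

```python
# Python equivalent of: sed 's/\t/\t0\t/1'
def fix_qrels_line(line: str) -> str:
    # OHSUMED qrels:  "<topic>\t<docid>\t<relevance>"
    # trec_eval wants: "<topic>\t0\t<docid>\t<relevance>"
    return line.replace("\t", "\t0\t", 1)

print(fix_qrels_line("OHSU1\t87049087\t2"))  # -> "OHSU1\t0\t87049087\t2"
```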

Output is saved at /shared/ohsumed/qrels

Also make a copy of the qrels at /data/ohsumed/qrels  
 

5. IndriRunQuery - Output

cd ~/biocaddie/baselines/ohsumed 
./<model>.sh <topic> <collection> | parallel -j 20 bash -c "{}" 

For orig queries:

./tfidf.sh orig combined | parallel -j 20 bash -c "{}" 
./jm.sh orig combined | parallel -j 20 bash -c "{}" 
./dir.sh orig combined | parallel -j 20 bash -c "{}" 
./two.sh orig combined | parallel -j 20 bash -c "{}" 
./okapi.sh orig combined | parallel -j 20 bash -c "{}" 
./rm3.sh orig combined | parallel -j 20 bash -c "{}" 

For short queries:

./tfidf.sh short combined | parallel -j 20 bash -c "{}" 
./jm.sh short combined | parallel -j 20 bash -c "{}" 
./dir.sh short combined | parallel -j 20 bash -c "{}" 
./two.sh short combined | parallel -j 20 bash -c "{}" 
./okapi.sh short combined | parallel -j 20 bash -c "{}" 
./rm3.sh short combined | parallel -j 20 bash -c "{}" 
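Each `<model>.sh` script appears to emit one IndriRunQuery command line per parameter setting in its sweep, and GNU parallel then runs 20 of them at a time. A hedged Python sketch of that pattern for the Dirichlet (dir) sweep; the sweep values and output-file naming are illustrative assumptions, not the scripts' actual contents:

```python
# Sketch: generate one IndriRunQuery invocation per mu value, to be
# piped into: parallel -j 20 bash -c "{}"
index = "/shared/ohsumed/indexes/ohsumed_all"
queries = "/shared/ohsumed/queries/queries.combined.orig"
outdir = "/data/ohsumed/output/dir/combined/orig"

def dir_commands():
    for mu in [250, 500, 1000, 2500]:  # hypothetical sweep values
        yield (
            f"IndriRunQuery {queries} -index={index} "
            f"-rule=method:dirichlet,mu:{mu} -trecFormat=true "
            f"-count=1000 > {outdir}/{mu}.out"
        )

for cmd in dir_commands():
    print(cmd)
```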

IndriRunQuery outputs for different baselines are stored at: 

/data/ohsumed/output/tfidf/combined/orig
/data/ohsumed/output/dir/combined/orig
/data/ohsumed/output/okapi/combined/orig
/data/ohsumed/output/jm/combined/orig
/data/ohsumed/output/two/combined/orig
/data/ohsumed/output/rm3/combined/orig 
---
/data/ohsumed/output/tfidf/combined/short
/data/ohsumed/output/dir/combined/short
/data/ohsumed/output/okapi/combined/short
/data/ohsumed/output/jm/combined/short
/data/ohsumed/output/two/combined/short
/data/ohsumed/output/rm3/combined/short 

 

6. Cross-validation 

cd ~/biocaddie  

For orig queries (which use qrels.all):

scripts/mkeval_ohsumed.sh <model> <topics> <collection>
E.g.: scripts/mkeval_ohsumed.sh tfidf orig combined 

For short queries (which use qrels.notest):

scripts/mkeval_ohsumed_v2.sh <model> <topics> <collection>
E.g.: scripts/mkeval_ohsumed_v2.sh tfidf short combined 
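A minimal sketch of what parameter-sweep cross-validation over queries involves: split the queries into folds, pick the parameter setting with the best mean MAP on the training folds, and score it on the held-out fold. The per-query scores and fold logic below are synthetic stand-ins, not the mkeval scripts' actual implementation:

```python
import random

random.seed(0)
params = ["mu=250", "mu=500", "mu=1000"]          # hypothetical sweep
queries = [f"OHSU{i}" for i in range(1, 11)]
# map_score[param][query] -> synthetic MAP value
map_score = {p: {q: random.random() for q in queries} for p in params}

def cross_validate(k: int = 5) -> float:
    """Mean held-out MAP over k folds, tuning params on the train folds."""
    folds = [queries[i::k] for i in range(k)]
    held_out = []
    for i, test_fold in enumerate(folds):
        train = [q for j, f in enumerate(folds) if j != i for q in f]
        best = max(params, key=lambda p: sum(map_score[p][q] for q in train))
        held_out += [map_score[best][q] for q in test_fold]
    return sum(held_out) / len(held_out)

print(round(cross_validate(), 4))
```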


7. Compare models

cd ~/biocaddie  
Rscript scripts/compare_ohsumed.R <collection> <from model> <to model> <topic> 
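compare_ohsumed.R prints a one-sided p-value per metric for the paired per-query differences between the two models. As a stdlib-only stand-in (the R script most likely uses a paired t-test), here is a paired sign-flip randomization test on synthetic per-query scores:

```python
import random

random.seed(1)

def randomization_test(a, b, trials=10000):
    """One-sided p-value for 'model b beats model a' over paired scores.

    Under H0 the per-query differences are symmetric around zero, so we
    randomly flip their signs and see how often the flipped sum reaches
    the observed one.
    """
    diffs = [y - x for x, y in zip(a, b)]
    observed = sum(diffs)
    hits = 0
    for _ in range(trials):
        flipped = sum(d if random.random() < 0.5 else -d for d in diffs)
        if flipped >= observed:
            hits += 1
    return hits / trials

# Synthetic per-query MAP values (illustrative only).
tfidf = [0.22, 0.31, 0.05, 0.40, 0.18, 0.27, 0.09, 0.33]
rm3   = [0.26, 0.35, 0.07, 0.44, 0.21, 0.30, 0.12, 0.36]
print(randomization_test(tfidf, rm3))
```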

Results (compared to tfidf baseline) 

Using orig queries (pre-test queries included)

| Model    | MAP     | NDCG    | P@20    | NDCG@20 | P@100   | NDCG@100 | Notes                                 | Date     |
|----------|---------|---------|---------|---------|---------|----------|---------------------------------------|----------|
| tfidf    | 0.2204  | 0.4538  | 0.2995  | 0.2904  | 0.1735  | 0.3376   | Sweep b and k1                        | 06/07/17 |
| Okapi    | 0.2218  | 0.4557  | 0.2819- | 0.3035  | 0.1717  | 0.3386   | Sweep b, k1, k3                       | 06/07/17 |
| QL (JM)  | 0.1876- | 0.4212- | 0.2505- | 0.2773  | 0.1403- | 0.295-   | Sweep lambda                          | 06/07/17 |
| QL (Dir) | 0.2032- | 0.4359- | 0.2713- | 0.2927  | 0.1633- | 0.3304   | Sweep mu                              | 06/07/17 |
| QL (TS)  | 0.2101- | 0.4415- | 0.2761- | 0.3029  | 0.1638- | 0.3277   | Sweep mu and lambda                   | 06/07/17 |
| RM3      | 0.2618+ | 0.4592  | 0.3277+ | 0.2965  | 0.1913+ | 0.3662+  | Sweep mu, fbDocs, fbTerms, and lambda | 06/08/17 |

(+ / - mark statistically significant improvement / degradation relative to the tfidf baseline)
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf dir orig 
[1] "map 0.2204 0.2032 p= 0.9988" 
[1] "ndcg 0.4538 0.4359 p= 0.9987" 
[1] "P_20 0.2995 0.2713 p= 0.9985" 
[1] "ndcg_cut_20 0.2904 0.2927 p= 0.417" 
[1] "P_100 0.1735 0.1633 p= 0.9945" 
[1] "ndcg_cut_100 0.3376 0.3304 p= 0.7764" 
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf jm orig 
[1] "map 0.2204 0.1876 p= 0.9966" 
[1] "ndcg 0.4538 0.4212 p= 0.9992" 
[1] "P_20 0.2995 0.2505 p= 0.9999" 
[1] "ndcg_cut_20 0.2904 0.2773 p= 0.8572" 
[1] "P_100 0.1735 0.1403 p= 1" 
[1] "ndcg_cut_100 0.3376 0.295 p= 0.9996" 
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf two orig 
[1] "map 0.2204 0.2101 p= 0.972" 
[1] "ndcg 0.4538 0.4415 p= 0.9859" 
[1] "P_20 0.2995 0.2761 p= 0.9954" 
[1] "ndcg_cut_20 0.2904 0.3029 p= 0.1072" 
[1] "P_100 0.1735 0.1638 p= 0.9992" 
[1] "ndcg_cut_100 0.3376 0.3277 p= 0.857" 
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf okapi orig 
[1] "map 0.2204 0.2218 p= 0.4445" 
[1] "ndcg 0.4538 0.4557 p= 0.414" 
[1] "P_20 0.2995 0.2819 p= 0.975" 
[1] "ndcg_cut_20 0.2904 0.3035 p= 0.1157" 
[1] "P_100 0.1735 0.1717 p= 0.6907" 
[1] "ndcg_cut_100 0.3376 0.3386 p= 0.4437" 

Using short queries (pre-test queries not included)

| Model    | MAP     | NDCG    | P@20    | NDCG@20            | P@100   | NDCG@100 | Notes                                 | Date     |
|----------|---------|---------|---------|--------------------|---------|----------|---------------------------------------|----------|
| tfidf    | 0.3188  | 0.6084  | 0.45    | 0.4255             | 0.2657  | 0.4625   | Sweep b and k1                        | 06/07/17 |
| Okapi    | 0.3117  | 0.6044  | 0.4408  | 0.4277             | 0.261   | 0.4569   | Sweep b, k1, k3                       | 06/07/17 |
| QL (JM)  | 0.2545- | 0.5527- | 0.3908- | 0.3882-            | 0.2135- | 0.3883-  | Sweep lambda                          | 06/07/17 |
| QL (Dir) | 0.2924- | 0.5866- | 0.3975  | 0.4018-            | 0.2492- | 0.432-   | Sweep mu                              | 06/07/17 |
| QL (TS)  | 0.2934- | 0.5828- | 0.4092- | 0.4122             | 0.2508- | 0.4385-  | Sweep mu and lambda                   | 06/07/17 |
| RM3      | 0.3717+ | 0.6087  | 0.5067+ | 0.4529 (p=0.0541)  | 0.291+  | 0.4934+  | Sweep mu, fbDocs, fbTerms, and lambda | 06/08/17 |
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf dir short
[1] "map 0.3188 0.2924 p= 0.9997"
[1] "ndcg 0.6084 0.5866 p= 0.9994"
[1] "P_20 0.45 0.3975 p= 0.9998"
[1] "ndcg_cut_20 0.4255 0.4018 p= 0.9881"
[1] "P_100 0.2657 0.2492 p= 0.9947"
[1] "ndcg_cut_100 0.4625 0.432 p= 0.9999"
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf jm short
[1] "map 0.3188 0.2545 p= 1"
[1] "ndcg 0.6084 0.5527 p= 1"
[1] "P_20 0.45 0.3908 p= 0.9984"
[1] "ndcg_cut_20 0.4255 0.3882 p= 0.9973"
[1] "P_100 0.2657 0.2135 p= 1"
[1] "ndcg_cut_100 0.4625 0.3883 p= 1"
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf okapi short
[1] "map 0.3188 0.3117 p= 0.7974"
[1] "ndcg 0.6084 0.6044 p= 0.6834"
[1] "P_20 0.45 0.4408 p= 0.7506"
[1] "ndcg_cut_20 0.4255 0.4277 p= 0.4236"
[1] "P_100 0.2657 0.261 p= 0.791"
[1] "ndcg_cut_100 0.4625 0.4569 p= 0.747"
root@integration-1:~/biocaddie# Rscript scripts/compare_ohsumed.R combined tfidf two short
[1] "map 0.3188 0.2934 p= 1"
[1] "ndcg 0.6084 0.5828 p= 0.9997"
[1] "P_20 0.45 0.4092 p= 0.9989"
[1] "ndcg_cut_20 0.4255 0.4122 p= 0.89"
[1] "P_100 0.2657 0.2508 p= 0.9991"
[1] "ndcg_cut_100 0.4625 0.4385 p= 0.9992"

8. Comments:

The bioCADDIE dataset contains descriptive metadata (structured and unstructured) for more than 1.5 million documents from biomedical datasets. There are 20 queries, which were manually refined and shortened to the important keywords. Relevance judgments have three categories: 0 (not relevant), 1 (possibly relevant), and 2 (definitely relevant). 

The TREC CDS dataset is a collection of 733,328 full-text biomedical journal articles. Thirty topics are provided, each with a "description" (a complete account of the patient's visit, including details such as vital statistics and drug dosages) and a "summary" (a simplified version of the narrative containing less irrelevant information). Queries are constructed from the topic summaries. As in bioCADDIE, relevance judgments fall into three categories: 0 (not relevant), 1 (possibly relevant), and 2 (definitely relevant). 

The OHSUMED test collection is a set of 348,566 references/documents from MEDLINE, the online medical information database, consisting of titles and/or abstracts from 270 medical journals. Compared to the two collections above, OHSUMED is quite small. OHSUMED topics include two fields: title (patient description) and description (information request). The topic descriptions are used to construct queries. Relevance judgments have two categories: 1 (possibly relevant) and 2 (definitely relevant). 

Based on the characteristics of the three collections, TREC CDS differs most from bioCADDIE and OHSUMED: it uses full-text search, and its queries are patient-visit summaries rather than ordinary information requests. OHSUMED is closer to bioCADDIE in terms of dataset similarity (not full text). However, bioCADDIE queries are short keyword queries, while OHSUMED queries are verbose. 

Across the baseline runs on all three collections, RM3 generally performs well and consistently. For TREC CDS and OHSUMED in particular, RM3 gives the best results on most metrics compared to the other baselines. This was expected, since RM3 is based on Rocchio-style relevance feedback, which can generate a good expanded query even when we do not know the collection well. 

One surprising result was that the query-likelihood baselines with smoothing (JM, Dir, and TS) did not improve over TFIDF on any metric for the TREC CDS and OHSUMED collections, unlike for bioCADDIE or in previous studies (http://trec.nist.gov/pubs/trec23/papers/pro-UCLA_MII_clinical.pdf). However, the type of queries could be an important factor behind these differences. This was also noted in Zhai's study (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.8978): keyword-only queries tend to perform better than more verbose queries (note: Zhai also observed that longer queries performed better than short queries).

We tried to examine the difference between verbose queries and keyword queries on the OHSUMED collection.

Using original queries (verbose queries) for OHSUMED

| Model    | MAP     | NDCG    | P@20    | NDCG@20 | P@100   | NDCG@100 | Notes               | Date     |
|----------|---------|---------|---------|---------|---------|----------|---------------------|----------|
| tfidf    | 0.3188  | 0.6084  | 0.45    | 0.4255  | 0.2657  | 0.4625   | Sweep b and k1      | 06/07/17 |
| QL (JM)  | 0.2545- | 0.5527- | 0.3908- | 0.3882- | 0.2135- | 0.3883-  | Sweep lambda        | 06/07/17 |
| QL (Dir) | 0.2924- | 0.5866- | 0.3975  | 0.4018- | 0.2492- | 0.432-   | Sweep mu            | 06/07/17 |
| QL (TS)  | 0.2934- | 0.5828- | 0.4092- | 0.4122  | 0.2508- | 0.4385-  | Sweep mu and lambda | 06/07/17 |

Using manually refined queries (mostly keywords) for OHSUMED

| Model    | MAP     | NDCG    | P@20    | NDCG@20 | P@100   | NDCG@100 | Notes               | Date     |
|----------|---------|---------|---------|---------|---------|----------|---------------------|----------|
| tfidf    | 0.315   | 0.5949  | 0.4198  | 0.3802  | 0.2614  | 0.4454   | Sweep b and k1      | 06/09/17 |
| QL (JM)  | 0.2587- | 0.5466- | 0.3817- | 0.3608  | 0.2257- | 0.3806-  | Sweep lambda        | 06/09/17 |
| QL (Dir) | 0.3027- | 0.5883  | 0.4087  | 0.379   | 0.261   | 0.4333-  | Sweep mu            | 06/09/17 |
| QL (TS)  | 0.3052- | 0.5871  | 0.4159  | 0.3896  | 0.2627  | 0.4354-  | Sweep mu and lambda | 06/09/17 |

We can see that with keyword queries, the gap in retrieval results between tfidf and QL is smaller.

Specifically, tfidf performed worse on all metrics with keyword queries than with the verbose/original queries. QL (JM) also performed worse on NDCG, P@20, NDCG@20, and NDCG@100. However, QL (Dir) and QL (TS) performed better on most metrics. This matches the finding in Zhai's study that JM works worst for short keyword queries but is more effective when queries are verbose, while Dir works better for concise keyword queries than for verbose ones.

The number of queries used to run the baselines in each collection could also contribute to the differences.

Below are the cross-validation results for the MAP metric of the tfidf, dir, jm, and two-stage baselines over all queries in the OHSUMED collection (only 97 of the 106 queries are listed; the remaining queries contain punctuation that trec_eval cannot process).

Compared to tfidf, dir has a higher MAP on 26/97 queries, jm on 19/97 queries, and two-stage on 27/97 queries.

Hence, with a smaller set of queries (for example, fewer than 25), it is possible that QL would outperform tfidf.
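The win counts above can be reproduced mechanically from the per-query MAP table. A sketch using four real rows from the table (the full computation would load all 97 rows):

```python
# Per-query MAP values for a few queries, taken from the table below.
per_query_map = {
    "OHSU-TEST12": {"tfidf": 0.0627, "dir": 0.0138, "jm": 0.0084, "two": 0.0171},
    "OHSU-TEST20": {"tfidf": 0.0445, "dir": 0.0478, "jm": 0.0529, "two": 0.0493},
    "OHSU25":      {"tfidf": 0.0466, "dir": 0.0703, "jm": 0.0569, "two": 0.0711},
    "OHSU35":      {"tfidf": 0.8056, "dir": 0.5475, "jm": 0.4332, "two": 0.7356},
}

def wins_over_tfidf(table, model):
    """Count queries where `model` has a strictly higher MAP than tfidf."""
    return sum(1 for row in table.values() if row[model] > row["tfidf"])

for model in ("dir", "jm", "two"):
    print(model, wins_over_tfidf(per_query_map, model), "/", len(per_query_map))
```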

Per-query MAP with each model's selected parameter setting (tfidf: k1=0.7, b=0.5; dir: mu=500; jm: lambda=0.7; two: mu=250, lambda=0.6; per-query deviations from these settings are noted in the cell):

| Query | tfidf | dir | jm | two |
|---|---|---|---|---|
| OHSU-TEST1 | 0 | 0 | 0 | 0 |
| OHSU-TEST11 | 0.0013 | 0.005 | 0.0059 | 0.004 |
| OHSU-TEST12 | 0.0627 | 0.0138 | 0.0084 | 0.0171 |
| OHSU-TEST14 | 0.0425 | 0.0315 | 0.047 | 0.0566 |
| OHSU-TEST15 | 0 | 0 | 0 | 0 |
| OHSU-TEST16 | 0 | 0 | 0 | 0 |
| OHSU-TEST17 | 0.0996 (b=0.4) | 0.0727 | 0.0991 | 0.0857 |
| OHSU-TEST19 | 0.0273 | 0.0226 | 0.0174 | 0.0229 |
| OHSU-TEST2 | 0 | 0 | 0 | 0 |
| OHSU-TEST20 | 0.0445 | 0.0478 | 0.0529 | 0.0493 |
| OHSU-TEST21 | 0.0176 | 0.0128 | 0.0057 | 0.0107 |
| OHSU-TEST22 | 0.0244 | 0.0213 | 0.0278 | 0.0244 |
| OHSU-TEST23 | 0.0611 | 0.0383 | 0.0329 | 0.0382 |
| OHSU-TEST25 | 0.1011 | 0.1199 | 0.0832 | 0.1104 |
| OHSU-TEST26 | 0.018 | 0.0165 | 0.0161 | 0.0155 |
| OHSU-TEST29 | 0.25 | 0.25 | 0.2511 | 0.25 |
| OHSU-TEST30 | 0.1667 | 0.3333 (mu=1000) | 1 | 0.5 (lambda=0.7) |
| OHSU-TEST31 | 0.0131 | 0.0109 | 0.011 | 0.0111 |
| OHSU-TEST32 | 0.021 | 0.014 | 0.0098 | 0.0132 |
| OHSU-TEST33 | 0.0908 | 0.0425 | 0.0195 | 0.0782 |
| OHSU-TEST34 | 0.1071 | 0.0929 | 0.1626 | 0.1174 |
| OHSU-TEST35 | 0.0068 | 0.0024 | 0.0019 | 0.0029 |
| OHSU-TEST37 | 0.0773 | 0.0597 | 0.0559 | 0.0694 |
| OHSU-TEST38 | 0.0439 | 0.031 | 0.0337 | 0.0407 |
| OHSU-TEST39 | 0 | 0 | 0 | 0 |
| OHSU-TEST4 | 0.2 | 0.1667 | 0.1667 | 0.1667 |
| OHSU-TEST40 | 0.0013 | 0.002 | 0.0016 | 0.0021 |
| OHSU-TEST41 | 0.0023 | 0.0036 | 0.0048 | 0.0034 |
| OHSU-TEST42 | 0.0074 | 0.0067 | 0.0064 | 0.0067 |
| OHSU-TEST43 | 0.0007 | 0.0006 | 0 | 0.0007 |
| OHSU-TEST5 | 0.0982 | 0.0913 | 0.0643 | 0.0868 |
| OHSU-TEST7 | 0 | 0 | 0 | 0 |
| OHSU-TEST8 | 0.0175 | 0.0203 | 0.0089 | 0.021 |
| OHSU-TEST9 | 0 | 0 | 0 | 0 |
| OHSU1 | 0.1978 | 0.2136 | 0.1919 | 0.2157 |
| OHSU10 | 0.1683 | 0.138 | 0.1184 | 0.1371 |
| OHSU11 | 0.2714 | 0.2335 | 0.2166 | 0.2162 |
| OHSU12 | 0.5681 | 0.5692 | 0.6 | 0.5797 |
| OHSU13 | 0.3131 | 0.2962 | 0.2169 | 0.2643 |
| OHSU14 | 0.3825 | 0.3546 | 0.2714 | 0.3519 |
| OHSU15 | 0.1553 | 0.2467 | 0.2296 | 0.2442 |
| OHSU16 | 0.1987 | 0.1674 | 0.1554 | 0.1757 |
| OHSU17 | 0.436 | 0.4677 | 0.2697 | 0.4261 |
| OHSU18 | 0.3092 | 0.3346 | 0.2672 | 0.3239 |
| OHSU19 | 0.7802 | 0.7808 | 0.7756 | 0.7796 |
| OHSU2 | 0.38 | 0.4186 | 0.2564 | 0.4064 |
| OHSU20 | 0.607 | 0.6242 | 0.6596 | 0.6292 |
| OHSU21 | 0.4549 | 0.4568 | 0.3549 | 0.4636 |
| OHSU22 | 0.1981 | 0.1747 | 0.16 | 0.1852 |
| OHSU23 | 0.5543 | 0.4215 | 0.4487 | 0.4242 |
| OHSU24 | 0.606 | 0.4991 | 0.3824 | 0.5521 |
| OHSU25 | 0.0466 | 0.0703 | 0.0569 | 0.0711 |
| OHSU26 | 0.8438 | 0.829 | 0.8408 | 0.8305 |
| OHSU27 | 0.6065 | 0.6296 | 0.5555 | 0.6381 |
| OHSU28 | 0.0867 | 0.0803 | 0.0681 | 0.0844 |
| OHSU29 | 0.4462 | 0.4381 | 0.4798 | 0.4447 |
| OHSU3 | 0.5177 | 0.4651 | 0.4347 | 0.4797 |
| OHSU30 | 0.5813 | 0.5806 | 0.4513 | 0.5674 |
| OHSU31 | 0.1089 | 0.0968 | 0.1081 | 0.1002 |
| OHSU32 | 0.1039 | 0.076 | 0.0598 | 0.083 |
| OHSU33 | 0.2543 | 0.151 | 0.1089 | 0.1654 |
| OHSU34 | 0.0517 | 0.0436 | 0.0431 | 0.0439 |
| OHSU35 | 0.8056 | 0.5475 | 0.4332 | 0.7356 |
| OHSU36 | 0.2005 | 0.164 | 0.1623 | 0.1589 |
| OHSU37 | 0.2567 | 0.0755 | 0.0423 | 0.1038 |
| OHSU38 | 0.6375 | 0.5826 | 0.5679 | 0.603 |
| OHSU39 | 0.125 | 0.0691 | 0.0457 | 0.0655 |
| OHSU4 | 0.6214 | 0.5542 | 0.5198 | 0.5606 |
| OHSU40 | 0.4596 | 0.3519 | 0.189 | 0.41 |
| OHSU41 | 0.3057 | 0.2482 | 0.2234 | 0.2549 |
| OHSU42 | 0.1898 | 0.1836 | 0.1861 | 0.1892 |
| OHSU43 | 0.7551 | 0.7262 | 0.6398 | 0.737 |
| OHSU44 | 0.4242 | 0.359 | 0.3526 | 0.3724 |
| OHSU45 | 0.0676 | 0.034 | 0.0107 | 0.0596 |
| OHSU46 | 0.2438 | 0.2487 | 0.1712 | 0.2513 |
| OHSU47 | 0.3565 | 0.3483 | 0.3376 | 0.3644 |
| OHSU48 | 0.2768 | 0.4356 | 0.3877 | 0.387 |
| OHSU49 | 0.2734 | 0.2411 | 0.1215 (lambda=0.8) | 0.2288 |
| OHSU5 | 0.6262 | 0.5909 | 0.5647 | 0.5992 |
| OHSU50 | 0.3306 | 0.2609 | 0.2804 | 0.2603 |
| OHSU51 | 0.3498 | 0.3026 | 0.1902 (lambda=0.8) | 0.263 |
| OHSU52 | 0.126 | 0.1618 | 0.1068 | 0.1503 |
| OHSU53 | 0.0958 | 0.0967 | 0.0123 | 0.0768 |
| OHSU54 | 0.3455 | 0.2924 | 0.3081 | 0.3254 |
| OHSU55 | 0.0947 | 0.1603 | 0.1134 | 0.1614 |
| OHSU56 | 0.1287 | 0.1358 | 0.1144 | 0.1004 |
| OHSU57 | 0.3136 | 0.3096 | 0.3194 | 0.3329 |
| OHSU58 | 0 | 0 | 0 | 0 |
| OHSU59 | 0.3361 | 0.3125 | 0.3089 | 0.3126 |
| OHSU6 | 0.1673 | 0.1533 | 0.1334 | 0.1774 |
| OHSU60 | 0.1021 | 0.013 | 0.009 | 0.0199 |
| OHSU61 | 0.0313 | 0.0612 | 0.0675 | 0.0546 |
| OHSU62 | 0.1278 | 0.0815 | 0.1016 | 0.0958 |
| OHSU63 | 0.0007 | 0.0018 | 0.0011 | 0.0011 |
| OHSU7 | 0.2013 | 0.1095 | 0.1296 | 0.1375 |
| OHSU8 | 0.0915 | 0.0683 | 0.0745 | 0.0687 |
| OHSU9 | 0.5243 | 0.4755 | 0.4899 | 0.4789 |


