
Results

Constituency Parsing Results:


Team            | Domain A (answers)      | Domain B (newsgroups)   | Domain C (reviews)      | Domain D (wsj)          | Average (A-C)
                | LP    LR    F1    POS   | LP    LR    F1    POS   | LP    LR    F1    POS   | LP    LR    F1    POS   | LP    LR    F1    POS
BerkeleyParser* | 75.86 75.98 75.92 90.20 | 77.87 78.42 78.14 91.24 | 77.65 76.68 77.16 89.33 | 88.34 88.08 88.21 97.08 | 77.13 77.03 77.07 90.26
OHSU            | 73.21 74.60 73.90 90.15 | 73.22 75.48 74.33 91.14 | 76.31 76.22 76.27 90.05 | 83.17 83.79 83.48 96.84 | 74.25 75.43 74.83 90.45
Vanderbilt      | 75.09 76.78 75.93 91.76 | 78.10 79.05 78.57 92.91 | 77.74 78.18 77.96 91.94 | 87.82 88.00 87.91 97.49 | 76.98 78.00 77.49 92.20
IMS             | 79.46 78.10 78.78 90.22 | 80.85 80.12 80.48 91.09 | 81.31 78.61 79.94 89.93 | 89.83 88.96 89.39 97.31 | 80.54 78.94 79.73 90.41
Stanford        | 78.79 77.91 78.35 91.21 | 81.41 80.49 80.95 91.62 | 81.95 80.32 81.13 92.45 | 90.00 88.93 89.46 97.01 | 80.72 79.57 80.14 91.76
Alpage-1        | 80.67 80.36 80.52 91.17 | 84.22 83.12 83.67 93.22 | 82.01 81.04 81.52 91.58 | 90.20 89.62 89.91 97.20 | 82.30 81.51 81.90 91.99
Alpage-2        | 80.77 80.43 80.60 91.14 | 84.71 83.36 84.03 92.58 | 82.28 81.24 81.76 91.63 | 90.19 89.56 89.87 97.22 | 82.59 81.68 82.13 91.78
DCU-Paris13-2   | 80.02 79.22 79.62 91.61 | 83.13 82.18 82.65 93.60 | 82.92 82.12 82.52 92.96 | 88.43 88.29 88.36 97.29 | 82.02 81.17 81.60 92.72
DCU-Paris13-1   | 82.96 81.43 82.19 91.63 | 85.01 83.65 84.33 93.39 | 84.79 83.29 84.03 92.89 | 90.75 90.32 90.53 97.53 | 84.25 82.79 83.52 92.64
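
For reference, F1 is the harmonic mean of labeled precision (LP) and labeled recall (LR), and the Average (A-C) columns appear to be per-metric arithmetic means over the three web domains (last digits may differ by one due to rounding). A minimal Python sketch, using the BerkeleyParser* row above:

```python
# Minimal sketch: recompute the F1 column and one Average (A-C) entry
# from the BerkeleyParser* figures in the table above.

def f1(lp, lr):
    """Harmonic mean of labeled precision and labeled recall."""
    return 2 * lp * lr / (lp + lr)

# (LP, LR) for the BerkeleyParser* baseline on domains A-C.
scores = {"answers": (75.86, 75.98),
          "newsgroups": (77.87, 78.42),
          "reviews": (77.65, 76.68)}

for domain, (lp, lr) in scores.items():
    print(domain, round(f1(lp, lr), 2))
# answers 75.92, newsgroups 78.14, reviews 77.16 -- the F1 column

avg_lp = sum(lp for lp, _ in scores.values()) / len(scores)
print(round(avg_lp, 2))  # 77.13 -- the Average (A-C) LP entry
```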


Dependency Parsing Results:
Team          | Domain A (answers)    | Domain B (newsgroups) | Domain C (reviews)    | Domain D (wsj)        | Average (A-C)
              | LAS   UAS   POS       | LAS   UAS   POS       | LAS   UAS   POS       | LAS   UAS   POS       | LAS   UAS   POS
Zhang&Nivre*  | 76.60 81.59 89.74     | 81.62 85.19 91.17     | 78.10 83.32 89.60     | 89.37 91.46 96.84     | 78.77 83.37 90.17
UPenn         | 68.54 82.28 89.65     | 74.41 86.10 90.99     | 70.17 82.88 89.02     | 81.74 91.99 96.93     | 71.04 83.75 89.89
UMass         | 72.51 78.36 89.42     | 77.23 81.61 91.28     | 74.89 80.34 89.90     | 81.15 83.97 94.71     | 74.88 80.10 90.20
NAIST         | 73.54 79.89 89.92     | 79.83 84.59 91.39     | 75.72 81.99 90.47     | 87.95 90.99 97.40     | 76.36 82.16 90.59
IMS-2         | 74.43 80.77 89.50     | 79.63 84.29 90.72     | 76.55 82.18 89.41     | 86.88 89.90 97.02     | 76.87 82.41 89.88
IMS-3         | 75.90 81.30 88.24     | 79.77 83.96 89.70     | 77.61 82.38 88.15     | 86.02 88.89 95.14     | 77.76 82.55 88.70
IMS-1         | 78.33 83.20 91.07     | 83.16 86.86 91.70     | 79.02 83.82 90.01     | 90.82 92.73 97.57     | 80.17 84.63 90.93
Copenhagen    | 78.12 82.91 90.42     | 82.90 86.59 91.15     | 79.58 84.13 89.83     | 90.47 92.42 97.25     | 80.20 84.54 90.47
Stanford-2    | 77.50 82.57 90.30     | 83.56 87.18 91.49     | 79.70 84.37 90.46     | 89.87 91.95 95.00     | 80.25 84.71 90.75
HIT-Baseline  | 80.75 85.84 90.99     | 85.26 88.90 92.32     | 81.60 86.60 90.65     | 91.88 93.88 97.76     | 82.54 87.11 91.32
HIT-Domain    | 80.79 85.86 90.99     | 85.18 88.81 92.32     | 81.92 86.80 90.65     | 91.82 93.83 97.76     | 82.63 87.16 91.32
Stanford-1    | 81.01 85.70 90.30     | 85.85 89.10 91.49     | 82.54 86.73 90.46     | 91.50 93.38 95.00     | 83.13 87.18 90.75
DCU-Paris13   | 81.15 85.80 91.79     | 85.38 88.74 93.81     | 83.86 88.31 93.11     | 89.67 91.79 97.29     | 83.46 87.62 92.90
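
For readers unfamiliar with the metrics: unlabeled attachment score (UAS) is the percentage of tokens whose predicted head is correct, and labeled attachment score (LAS) additionally requires the correct dependency label. A toy Python sketch on hypothetical data (not the official eval.pl script, which also handles details such as punctuation):

```python
# Toy sketch of attachment scoring over one sentence.

def las_uas(gold, pred):
    """gold/pred: one (head_index, label) pair per token."""
    assert len(gold) == len(pred)
    n = len(gold)
    uas_hits = sum(gh == ph for (gh, _), (ph, _) in zip(gold, pred))
    las_hits = sum(g == p for g, p in zip(gold, pred))
    return 100.0 * las_hits / n, 100.0 * uas_hits / n

# Hypothetical five-token sentence (heads are 1-based, 0 = root).
gold = [(2, "nsubj"), (0, "root"), (2, "dobj"), (5, "det"), (3, "nmod")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj"), (5, "det"), (2, "nmod")]

las, uas = las_uas(gold, pred)
print(f"LAS={las:.1f} UAS={uas:.1f}")  # LAS=60.0 UAS=80.0
```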


* Baseline models trained only on the OntoNotes WSJ training corpus. For constituents this is the publicly available BerkeleyParser (Petrov et al., ACL 2006); for dependencies it is a reimplementation of the transition-based parser of Zhang & Nivre (ACL 2011) with the TnT part-of-speech tagger (Brants, ANLP 2000).

POS tagging accuracies differ slightly between the two tables because of rounding and small discrepancies between the evalb and eval.pl evaluation scripts.