This page reports the performance of the baseline methods on our dataset and their implementation details.
We give the repository link for each method. To access our source code, please fill in the "access form" on the Homepage.
1. HMM ( https://github.com/luopeixiang/named_entity_recognition )
All parameters use the default settings in the source code.
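For reference, here is a minimal sketch of the supervised-HMM tagging idea (count-based estimation plus Viterbi decoding). This is not the repo's code; the function names, data format, and smoothing constant are illustrative assumptions.

```python
import numpy as np

def train_hmm(sentences, tag_seqs, tag2id, word2id, alpha=1e-6):
    """Count-based MLE with additive smoothing (alpha is an assumed constant)."""
    T, V = len(tag2id), len(word2id)
    trans = np.full((T, T), alpha)   # transition counts, P(tag_i | tag_{i-1})
    emit = np.full((T, V), alpha)    # emission counts,   P(word | tag)
    start = np.full(T, alpha)        # initial-tag counts, P(tag_0)
    for words, tags in zip(sentences, tag_seqs):
        start[tag2id[tags[0]]] += 1
        for i, (w, t) in enumerate(zip(words, tags)):
            emit[tag2id[t], word2id[w]] += 1
            if i > 0:
                trans[tag2id[tags[i - 1]], tag2id[t]] += 1
    # normalize counts into probability distributions
    return (trans / trans.sum(1, keepdims=True),
            emit / emit.sum(1, keepdims=True),
            start / start.sum())

def viterbi(obs, trans, emit, start):
    """Most likely tag sequence for one sentence of word ids (log space)."""
    n, T = len(obs), trans.shape[0]
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((n, T), dtype=int)
    for i in range(1, n):
        # scores[j, k] = best score ending in tag j at step i-1, then tag k
        scores = logp[:, None] + np.log(trans) + np.log(emit[:, obs[i]])[None, :]
        back[i] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for i in range(n - 1, 0, -1):    # backtrack from the last position
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```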
2. CRF ( https://github.com/luopeixiang/named_entity_recognition )
All parameters use the default settings in the source code.
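Similarly, a hedged sketch of a feature-based CRF tagger, here using sklearn-crfsuite as a stand-in for the repo's implementation; the feature template and hyperparameters (c1, c2, max_iterations) are illustrative, not the repo's defaults.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def char2features(sent, i):
    """Tiny illustrative feature template for character-level Chinese NER."""
    feats = {"bias": 1.0, "char": sent[i]}
    feats["-1:char"] = sent[i - 1] if i > 0 else "<BOS>"
    feats["+1:char"] = sent[i + 1] if i < len(sent) - 1 else "<EOS>"
    return feats

train_sents = [["我", "在", "北", "京"]]     # toy data: one character sequence
train_tags = [["O", "O", "B-LOC", "I-LOC"]]  # toy BIO labels

X = [[char2features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100, all_possible_transitions=True)
crf.fit(X, train_tags)
print(crf.predict(X))  # predicted BIO tag sequences
```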
3. BiLSTM ( https://github.com/luopeixiang/named_entity_recognition )
loss function: cross entropy
optimizer: Adam
batch size: 64
learning rate: 0.001
training epochs: 30
Other parameters use the default settings in the source code.
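A minimal PyTorch sketch wiring up the settings listed above; the embedding/hidden sizes, vocabulary size, tag count, and toy data are assumptions (the repo's defaults may differ).

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_tags)

    def forward(self, x):                 # x: (batch, seq_len) token ids
        out, _ = self.lstm(self.emb(x))   # (batch, seq_len, 2*hidden)
        return self.fc(out)               # per-token tag scores

# toy tensors so the loop runs end to end; real batches come from the dataset
X = torch.randint(1, 5000, (256, 50))
Y = torch.randint(0, 10, (256, 50))
train_loader = DataLoader(TensorDataset(X, Y), batch_size=64, shuffle=True)  # batch size: 64

model = BiLSTMTagger(vocab_size=5000, num_tags=10)           # illustrative sizes
criterion = nn.CrossEntropyLoss()                            # loss: cross entropy
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # Adam, lr 0.001

for epoch in range(30):                                      # 30 training epochs
    for x, y in train_loader:
        optimizer.zero_grad()
        logits = model(x)
        loss = criterion(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
        loss.backward()
        optimizer.step()
```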
4. BiLSTM-CRF ( https://github.com/luopeixiang/named_entity_recognition )
loss function: cross entropy
optimizer: Adam
batch size: 64
learning rate: 0.001
training epochs: 30
Other parameters use the default settings in the source code.
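A hedged sketch of the same tagger with a CRF layer on top, using the pytorch-crf package as a stand-in for the repo's own CRF implementation; sizes and the toy batch are illustrative.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def emissions(self, x):
        return self.fc(self.lstm(self.emb(x))[0])

    def loss(self, x, y, mask):
        return -self.crf(self.emissions(x), y, mask=mask)     # sequence NLL

    def decode(self, x, mask):
        return self.crf.decode(self.emissions(x), mask=mask)  # best tag paths

model = BiLSTMCRF(vocab_size=5000, num_tags=10)               # illustrative sizes
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)    # Adam, lr 0.001
x = torch.randint(1, 5000, (64, 50))                          # one toy batch of 64
y = torch.randint(0, 10, (64, 50))
mask = torch.ones(64, 50, dtype=torch.bool)
loss = model.loss(x, y, mask)                                 # repeat for 30 epochs
loss.backward()
optimizer.step()
```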
5. BERT-BiLSTM-CRF ( https://github.com/macanv/BERT-BiLSTM-CRF-NER )
batch size: 64
learning rate: 1e-5
training epochs: 10
dropout rate: 0.5
gradient clip: 0.5
early stop strategy: stop_if_no_decrease_hook
LSTM size: 128
RNN layers: 1
max sequence length: 202
BERT model: chinese_L-12_H-768_A-12
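The linked repo is TensorFlow-based; the following PyTorch/transformers sketch only illustrates the architecture (BERT encoder, then BiLSTM, dropout, and a CRF layer) with the sizes listed above. bert-base-chinese is HuggingFace's release of the chinese_L-12_H-768_A-12 checkpoint, and pytorch-crf again stands in for the CRF layer.

```python
import torch
import torch.nn as nn
from torchcrf import CRF                # pip install pytorch-crf
from transformers import BertModel      # pip install transformers

class BertBiLSTMCRF(nn.Module):
    def __init__(self, num_tags, lstm_size=128, dropout=0.5):
        super().__init__()
        # chinese_L-12_H-768_A-12 == HuggingFace's bert-base-chinese (768-dim)
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.lstm = nn.LSTM(768, lstm_size, num_layers=1,   # LSTM size 128, 1 layer
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)                  # dropout rate 0.5
        self.fc = nn.Linear(2 * lstm_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.fc(self.dropout(self.lstm(hidden)[0]))
        mask = attention_mask.bool()
        if labels is not None:
            return -self.crf(emissions, labels, mask=mask)  # training loss
        return self.crf.decode(emissions, mask=mask)        # predicted tag ids

# Training, per the settings above: batch size 64, learning rate 1e-5,
# 10 epochs, gradient-norm clipping at 0.5 via
# torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5), plus early stopping.
```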
6. Lattice-LSTM ( https://github.com/LeeSureman/Batch_Parallel_LatticeLSTM )
batch size: 20
optimizer: stochastic gradient descent (SGD)
learning rate: 0.045
dropout: 0.5
early stop strategy: stop_if_no_decrease_hook
epochs: 100
seed: 100
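The lattice model itself lives in the linked repo; the sketch below only shows how the seed and optimizer settings above would be wired up in PyTorch, with a placeholder module standing in for the model.

```python
import random
import numpy as np
import torch

seed = 100                          # seed: 100
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)

model = torch.nn.Linear(10, 10)     # placeholder for the Lattice-LSTM model
optimizer = torch.optim.SGD(model.parameters(), lr=0.045)  # SGD, lr 0.045
# batch size 20, dropout 0.5 inside the model, up to 100 epochs with early
# stopping when dev performance stops improving (stop_if_no_decrease_hook)
```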