Chinese NER with Boundary Smoothing


Paper Overview

Paper: "Boundary Smoothing for Named Entity Recognition"

GitHub: syuoni/eznlp (Easy Natural Language Processing)

A detailed reading of the paper is omitted here; this post focuses on the code.

Environment Setup

The packaging package must be pinned to the version specified in the requirements (packaging==20.4), otherwise the program fails at runtime. It is best to downgrade to this version only after all other packages have been installed, because it is quite old and can prevent other packages from installing. If packaging is already at 20.4 while installing other packages, you may see an error like the following.
TypeError: canonicalize_version() got an unexpected keyword argument 'strip_trailing_zero'

numpy==1.18.5 simply would not install (ModuleNotFoundError: No module named 'distutils.msvccompiler'); installed 1.19.5 instead.
pandas==1.0.5 would not install either, failing with the same error; installed 1.2.0 instead.

flair==0.8 also failed to install at first: it needs a C++ toolchain. After installing the Build Tools with the C++ components selected, it installed successfully.

spacy==2.3.2 is also problematic and raises the following two errors (encountered after adjusting the setuptools and wheel package versions, respectively):

  1. python setup.py bdist_wheel did not run successfully.
  2. ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blis)
    Could not be resolved; installed 2.3.4 instead.

Code Walkthrough

load_data()

The format of the returned data:
train, test and dev are all lists, and each list element looks like this:
{'tokens': [1, 9, 5, 6, 年, 5, 月, 2, 9, 日, 出, 生, ,, 中, 共, 党, 员, ,, 大, 学, 文, 化, ,, 高, 级, 工, 程, 师, ,, 历, 任, 重, 庆, 万, 里, 蓄, 电, 池, 股, 份, 有, 限, 公, 司, 技, 改, 办, 主, 任, 、, 研, 究, 所, 所, 长, 、, 总, 工, 程, 师, 、, 董, 事, 。], 'chunks': [('TITLE', 13, 17), ('EDU', 18, 22), ('TITLE', 23, 28), ('ORG', 31, 44), ('TITLE', 44, 49), ('TITLE', 50, 55), ('TITLE', 56, 60), ('TITLE', 61, 63)], 'doc_idx': '0'}

build_ER_config()

This mainly initializes the configuration objects and breaks down into two main parts.
BoundarySelectionDecoderConfig

  • EncoderConfig: configures the encoder type (FFN, LSTM, Conv, Transformer, etc.). The default is FFN, i.e. FFNEncoder -> FeedForwardBlock (linear layer + activation + dropout); a minimal sketch of such a block follows below.
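
A minimal PyTorch sketch of what such a feed-forward block looks like; the class name and layer sizes are illustrative assumptions, not the exact eznlp implementation:

import torch

class FeedForwardBlockSketch(torch.nn.Module):
    """Linear projection + activation + dropout, as used by the default FFN encoder."""
    def __init__(self, in_dim: int, out_dim: int, drop_rate: float = 0.5):
        super().__init__()
        self.proj_layer = torch.nn.Linear(in_dim, out_dim)
        self.activation = torch.nn.ReLU()
        self.dropout = torch.nn.Dropout(drop_rate)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, in_dim) -> (batch, seq_len, out_dim)
        return self.dropout(self.activation(self.proj_layer(x)))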

ExtractorConfig

  • collect_IE_assembly_config
    • load_vectors
    • OneHotConfig (the set of single characters: build_vocab builds the character vocabulary, exemplify maps a sentence to its ids in that vocabulary, and an embedding layer is also built; see the sketch after this list)
    • MultiHotConfig (not used by default)
    • SoftLexiconConfig (not used by default)
    • load_pretrained (when using BERT, loads the pretrained model structure and tokenizer)
    • BertLikeConfig (its instantiate method returns a BertLikeEmbedder)
    • SpanBertLikeConfig (not used by default)
  • build_vocabs_and_dims: builds the vocabularies and the label set, called later via the Dataset
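
A rough sketch of what the character-level one-hot pipeline does; the class name, vocabulary handling and embedding size below are simplified assumptions, not the eznlp API:

from collections import Counter

import torch

class CharVocabSketch:
    """Hypothetical, simplified version of the character-vocabulary logic described above."""
    def __init__(self, sentences, min_freq: int = 1):
        counter = Counter(ch for sent in sentences for ch in sent)
        self.idx2tok = ['<pad>', '<unk>'] + [t for t, c in counter.items() if c >= min_freq]
        self.tok2idx = {t: i for i, t in enumerate(self.idx2tok)}
        # A randomly initialized embedding layer over the character vocabulary
        self.embedding = torch.nn.Embedding(len(self.idx2tok), 100)

    def exemplify(self, tokens):
        # Map a tokenized sentence to its ids in the vocabulary (1 = <unk>)
        return torch.tensor([self.tok2idx.get(t, 1) for t in tokens])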

EncoderConfig (an LSTM network here, RNNEncoder; no usage found so far)

load_pretrained:

bert_like, tokenizer = load_pretrained(args.bert_arch, args, cased=True)

bert_like is the BERT model structure; its base model consists of

  • BertEmbeddings
  • BertEncoder
  • BertPooler

Next, bert_like is passed as an argument when initializing a BertLikeConfig, and the resulting instance is again named bert_like, so the final bert_like ends up with an attribute that is also called bert_like; the two are fundamentally different objects.

ExtractorConfig consists of two main parts: bert_like (a BertLikeConfig instance) and decoder (a BoundarySelectionDecoderConfig instance).

build_ER_config() ultimately returns an ExtractorConfig instance config, and the model is created via config.instantiate().

ExtractorConfig's instantiate() method returns an instance of another class, Extractor:

def instantiate(self):
    # Only check validity at the most outside level
    assert self.valid
    return Extractor(self)

Extractor

class Extractor(ModelBase):
    def __init__(self, config: ExtractorConfig):
        super().__init__(config)

When inheriting from ModelBase, the constructor calls instantiate() on every configured sub-module:

class ModelBase(torch.nn.Module):
    def __init__(self, config: ModelConfigBase):
        super().__init__()
        for name in config._all_names:
            if (c := getattr(config, name)) is not None:
                setattr(self, name, c.instantiate())

So BertLikeConfig's instantiate is called:

def instantiate(self):
    return BertLikeEmbedder(self)

process_IE_data

With the default configuration this does nothing.

Dataset
Generates train_set, dev_set and test_set.

Once training starts, stepping through the code shows how the forward pass and the loss computation work:
trainer.py

with torch.cuda.amp.autocast(enabled=self.use_amp):
    loss_with_possible_y_pred = self.forward_batch(batch)

def forward_batch(self, batch: Batch):
    """
    Forward to the loss (scalar).
    Optionally return the predicted labels of the batch for evaluation.

    Returns
    -------
    A scalar Tensor of loss, or
    A Tuple of (loss, y_pred_1, y_pred_2, ...)
    """
    losses, states = self.model(batch, return_states=True)
    loss = losses.mean()

    if self.num_metrics == 0:
        return loss
    else:
        return loss, *self.model.decoder._unsqueezed_decode(batch, **states)

Here batch is the data returned for each mini-batch and contains:

  • tokenized_text: the text of this batch, a list of size (batch_size, seq_len)
  • seq_lens: the length of each sentence in the batch, a tensor of size (batch_size, )
  • mask: the padding mask; it appears to be inverted, i.e. True at padded positions (see the sketch after this list)
  • ohots: a dict with key=text and a list value of size (batch_size, seq_len)
  • bert_like: a dict with three keys:
    • sub_tok_ids
    • sub_mask
    • ori_indexes
  • boundaries_objs: a list whose elements are instances of the Boundaries class in eznlp/model/decoder/boundaries.py. This is the core idea of span-based NER, and also the key improvement of the boundary smoothing paper.
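
As an aside, a mask of that "inverted" kind can be built from seq_lens as follows; this is my own illustration of the assumed convention, not the eznlp code:

import torch

def build_padding_mask(seq_lens: torch.Tensor, max_len: int) -> torch.Tensor:
    """Return a (batch_size, max_len) bool mask that is True at padded positions."""
    positions = torch.arange(max_len).unsqueeze(0)   # (1, max_len)
    return positions >= seq_lens.unsqueeze(1)        # (batch_size, max_len), True where position >= length

# e.g. build_padding_mask(torch.tensor([3, 5]), 5) gives
# tensor([[False, False, False,  True,  True],
#         [False, False, False, False, False]])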

The contents of batch are produced by the Dataset class's __getitem__ method:

def __getitem__(self, i):
    entry = self._get_entry(i)
    example = {}
    if 'tokens' in self.data[0]:
        example['tokenized_text'] = entry['tokens'].text

    example.update(self.config.exemplify(entry, training=self.training))
    return example

This in turn calls exemplify on self.config, which here is the ExtractorConfig instance, so that class's exemplify method is invoked:

def exemplify(self, data_entry: dict, training: bool=True):
    example = {}

    if self.ohots is not None:
        example['ohots'] = {f: c.exemplify(data_entry['tokens']) for f, c in self.ohots.items()}

    if self.mhots is not None:
        example['mhots'] = {f: c.exemplify(data_entry['tokens']) for f, c in self.mhots.items()}

    if self.nested_ohots is not None:
        example['nested_ohots'] = {f: c.exemplify(data_entry['tokens']) for f, c in self.nested_ohots.items()}

    for name in self._pretrained_names:
        if getattr(self, name) is not None:
            example[name] = getattr(self, name).exemplify(data_entry['tokens'])

    example.update(self.decoder.exemplify(data_entry, training=training))
    return example

This calls the exemplify methods of several sub-modules: ohots, _pretrained_names (e.g. bert_like), and self.decoder (here a BoundarySelectionDecoderConfig instance). BoundarySelectionDecoderConfig inherits from BoundariesDecoderMixin, whose exemplify method is:

def exemplify(self, data_entry: dict, training: bool=True):
    return {'boundaries_obj': Boundaries(data_entry, self, training=training)}

That is, boundaries_obj is produced by instantiating the Boundaries class. Let's look at boundaries_objs in detail first, using the following sample as an example:

{'chunks': [('ORG', 2, 16), ('TITLE', 16, 19), ('TITLE', 20, 25)], 'doc_idx': '0', 'tokens': [曾, 任, 深, 圳, 市, 建, 筑, 机, 械, 动, 力, 公, 司, 分, 公, 司, 副, 经, 理, 、, 主, 任, 工, 程, 师, ;]}

Following the standard span-based NER approach, all spans are enumerated and each span is classified as some entity type (or none). For the sentence above with length 26, how many spans are there?
Position 0 has 26 spans, position 1 has 25 spans, ..., position 25 has 1 span; in general position i has n-i spans (n being the sentence length). For convenience the spans are stored in an (n, n) tensor: dimension 0 is the start position and dimension 1 the end position of the span. The entity type of each span also has to be recorded; the plain span-based approach simply writes the entity type id into the cell indexed by (start, end).
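
As a quick illustration of this (start, end) layout (my own toy example, with an inclusive end index):

n = 4  # toy sentence length
spans = [(start, end) for start in range(n) for end in range(start, n)]
# position 0 contributes 4 spans, position 1 contributes 3, ..., i.e. n*(n+1)/2 = 10 spans in total
print(spans)
# [(0, 0), (0, 1), (0, 2), (0, 3), (1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]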

The entity label set looks like, for example:
{'': 0, 'CONT': 2, 'EDU': 5, 'LOC': 8, 'NAME': 1, 'ORG': 6, 'PRO': 7, 'RACE': 3, 'TITLE': 4}, 9 labels in total, so label_nums=9. Without label smoothing, the tensor is built from the gold entity positions and types as follows:

if config.sb_epsilon <= 0 and config.sl_epsilon <= 0:
    # Cross entropy loss for non-smoothing
    self.label_ids = torch.full((self.num_tokens, self.num_tokens), config.none_idx, dtype=torch.long)
    for label, start, end in self.chunks:
        self.label_ids[start, end-1] = config.label2idx[label]

To apply boundary smoothing, each span cell must carry a probability distribution instead of a single id, so one more dimension of size label_nums is added, giving a tensor of size (n, n, label_nums):

for label, start, end in self.chunks:
    label_id = config.label2idx[label]
    # Each span cell has size label_nums; put a confidence of 1 - sb_epsilon at the gold entity type id.
    # sb_epsilon is a hyperparameter; the paper typically uses 0.1, 0.2 or 0.3.
    # Since chunks like ('ORG', 2, 16) use an exclusive end, end-1 is used here (and likewise for sur_end below).
    self.label_ids[start, end-1, label_id] += (1 - config.sb_epsilon)

    # sb_size is another hyperparameter: how many steps around the gold boundary to smooth over (1, 2 or 3)
    for dist in range(1, config.sb_size+1):
        # Confidence given to each boundary surrounding the gold one; with sb_size=1 and sb_epsilon=0.1 it is 0.025
        eps_per_span = config.sb_epsilon / (config.sb_size * dist * 4)
        # For ('ORG', 2, 16) this yields four smoothed spans: [(2, 15), (1, 16), (2, 17), (3, 16)]
        sur_spans = list(_spans_from_surrounding((start, end), dist, self.num_tokens))
        # Give each of these spans a small confidence
        for sur_start, sur_end in sur_spans:
            self.label_ids[sur_start, sur_end-1, label_id] += (eps_per_span*config.sb_adj_factor)
        # Absorb the probabilities assigned to illegal positions
        self.label_ids[start, end-1, label_id] += eps_per_span * (dist * 4 - len(sur_spans))

To summarize: with sb_size=1, each gold entity span gets four additional smoothed spans, and the confidences of these five spans sum to 1; with sb_size=2, four spans at smoothing distance 1 and eight spans at distance 2 are generated, and the total confidence still sums to 1.
Finally, the value at index 0, i.e. the none label, is set for every cell. The purpose of this step is to turn each (start, end) cell into a proper distribution that sums to 1: whatever probability mass has not been assigned to entity labels goes to the none label.

self.label_ids[:, :, config.none_idx] = 1 - self.label_ids.sum(dim=-1)
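
A small numeric sketch of the result; my own illustration, assuming sb_epsilon=0.1, sb_size=1, sb_adj_factor=1 and the gold chunk ('ORG', 2, 16) away from the sequence border:

import torch

num_tokens, num_labels, none_idx, org_idx = 26, 9, 0, 6
sb_epsilon, sb_size = 0.1, 1

label_ids = torch.zeros(num_tokens, num_tokens, num_labels)
start, end = 2, 16                                   # gold chunk ('ORG', 2, 16), exclusive end
label_ids[start, end-1, org_idx] += 1 - sb_epsilon   # 0.9 at the gold cell
eps_per_span = sb_epsilon / (sb_size * 1 * 4)        # 0.025 per surrounding span
for s, e in [(2, 15), (1, 16), (2, 17), (3, 16)]:    # the four surrounding spans at distance 1
    label_ids[s, e-1, org_idx] += eps_per_span
label_ids[:, :, none_idx] = 1 - label_ids.sum(dim=-1)

print(label_ids[2, 15])   # gold cell: 0.1 on the none label, 0.9 on ORG
print(label_ids[1, 15])   # neighbouring cell: 0.975 on the none label, 0.025 on ORG
print(label_ids.sum(dim=-1).allclose(torch.ones(num_tokens, num_tokens)))  # True: every cell sums to 1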

With the batch contents roughly clear, back to the forward pass: self.model is the Extractor instance, Extractor inherits from ModelBase, and ModelBase's forward method is:

def forward(self, batch: Batch, return_states: bool=False):
    states = self.forward2states(batch)
    losses = self.decoder(batch, **states)

    # Return `states` for the `decode` method, to avoid duplicated computation.
    if return_states:
        return losses, states
    else:
        return losses

Extractor's own forward2states() method:

def _get_full_embedded(self, batch: Batch):
    embedded = []

    if hasattr(self, 'ohots'):
        ohots_embedded = [self.ohots[f](batch.ohots[f]) for f in self.ohots]
        embedded.extend(ohots_embedded)

    if hasattr(self, 'mhots'):
        mhots_embedded = [self.mhots[f](batch.mhots[f]) for f in self.mhots]
        embedded.extend(mhots_embedded)

    if hasattr(self, 'nested_ohots'):
        nested_ohots_embedded = [self.nested_ohots[f](**batch.nested_ohots[f], seq_lens=batch.seq_lens) for f in self.nested_ohots]
        embedded.extend(nested_ohots_embedded)

    return torch.cat(embedded, dim=-1)


def _get_full_hidden(self, batch: Batch):
    full_hidden = []

    if any([hasattr(self, name) for name in ExtractorConfig._embedder_names]):
        # Pass through the ohots embedding layer to get embedded: (batch_size, seq_len, embed_dim)
        embedded = self._get_full_embedded(batch)
        if hasattr(self, 'intermediate1'):
            full_hidden.append(self.intermediate1(embedded, batch.mask))
        else:
            full_hidden.append(embedded)

    # Then pass through BERT, giving (batch_size, seq_len, 768)
    for name in ExtractorConfig._pretrained_names:
        if hasattr(self, name):
            full_hidden.append(getattr(self, name)(**getattr(batch, name)))

    # Concatenate the two embeddings: (batch_size, seq_len, 768+embed_dim)
    full_hidden = torch.cat(full_hidden, dim=-1)

    if hasattr(self, 'intermediate2'):
        return self.intermediate2(full_hidden, batch.mask)
    else:
        return full_hidden

def forward2states(self, batch: Batch):
    return {'full_hidden': self._get_full_hidden(batch)}

Next, self.decoder is a BoundarySelectionDecoder instance; its forward method:

def forward(self, batch: Batch, full_hidden: torch.Tensor):
    batch_scores = self.compute_scores(batch, full_hidden)

    losses = []
    for curr_scores, boundaries_obj, curr_len in zip(batch_scores, batch.boundaries_objs, batch.seq_lens.cpu().tolist()):
        curr_non_mask = getattr(boundaries_obj, 'non_mask', self._get_span_non_mask(curr_len))

        loss = self.criterion(curr_scores[:curr_len, :curr_len][curr_non_mask], boundaries_obj.label_ids[curr_non_mask])
        losses.append(loss)
    return torch.stack(losses)

compute_scores

def compute_scores(self, batch: Batch, full_hidden: torch.Tensor):
    # full_hidden is the representation computed above
    if hasattr(self, 'affine_start'):
        # affined_start: (batch_size, seq_len, affine_dim)
        affined_start = self.affine_start(full_hidden, batch.mask)
        affined_end = self.affine_end(full_hidden, batch.mask)
    else:
        affined_start = self.affine(full_hidden, batch.mask)
        affined_end = self.affine(full_hidden, batch.mask)

    if hasattr(self, 'U'):
        # affined_start: (batch, start_step, affine_dim) -> (batch, 1, start_step, affine_dim)
        # affined_end: (batch, end_step, affine_dim) -> (batch, 1, affine_dim, end_step)
        # scores1: (batch, 1, start_step, affine_dim) * (voc_dim, affine_dim, affine_dim) * (batch, 1, affine_dim, end_step) -> (batch, voc_dim, start_step, end_step)
        scores1 = self.dropout(affined_start).unsqueeze(1).matmul(self.U).matmul(self.dropout(affined_end).permute(0, 2, 1).unsqueeze(1))
        # scores: (batch, start_step, end_step, voc_dim)
        scores = scores1.permute(0, 2, 3, 1)
    else:
        scores = 0

    # affined_cat: (batch, start_step, end_step, affine_dim*2)
    affined_cat = torch.cat([self.dropout(affined_start).unsqueeze(2).expand(-1, -1, affined_end.size(1), -1),
                             self.dropout(affined_end).unsqueeze(1).expand(-1, affined_start.size(1), -1, -1)], dim=-1)

    if hasattr(self, 'size_embedding'):
        # size_embedded: (start_step, end_step, emb_dim)
        size_embedded = self.size_embedding(self._get_span_size_ids(full_hidden.size(1)))
        # affined_cat: (batch, start_step, end_step, affine_dim*2 + emb_dim)
        affined_cat = torch.cat([affined_cat, self.dropout(size_embedded).unsqueeze(0).expand(full_hidden.size(0), -1, -1, -1)], dim=-1)

    # scores2: (voc_dim, affine_dim*2 + emb_dim) * (batch, start_step, end_step, affine_dim*2 + emb_dim, 1) -> (batch, start_step, end_step, voc_dim, 1)
    scores2 = self.W.matmul(affined_cat.unsqueeze(-1))
    # scores: (batch, start_step, end_step, voc_dim)
    scores = scores + scores2.squeeze(-1)
    return scores + self.b

A biaffine scorer is used by default; affine_start and affine_end have the following structure:

FFNEncoder(
  (dropout): CombinedDropout(
    (dropout): Dropout(p=0.4, inplace=False)
  )
  (ff_blocks): ModuleList(
    (0): FeedForwardBlock(
      (proj_layer): Linear(in_features=868, out_features=150, bias=True)
      (activation): ReLU()
      (dropout): Dropout(p=0.0, inplace=False)
    )
  )
)

The affined_cat concatenation is easy to follow in the code; its rationale is the standard biaffine scoring scheme (as in biaffine dependency parsing): the score of a span (i, j) is a bilinear term between the start representation h_i and the end representation h_j plus a linear term over their concatenation (optionally extended with a span-size embedding), roughly s(i, j) = h_i^T U h_j + W [h_i; h_j; size_emb] + b.

As for size_embedding, it is a randomly initialized embedding layer over span sizes, with emb_dim=25 by default:

if config.size_emb_dim > 0:
    self.size_embedding = torch.nn.Embedding(config.max_size_id+1, config.size_emb_dim)
    reinit_embedding_(self.size_embedding)

Here max_size_id is roughly the maximum entity length observed in the data (that is the simple way to think about it; in fact numpy.quantile is applied to the distribution of entity lengths to pick a length that covers the vast majority of entities):

span_sizes = [end-start for data in partitions for entry in data for label, start, end in entry['chunks']]
self.max_size_id = math.ceil(numpy.quantile(span_sizes, MAX_SIZE_ID_COV_RATE)) - 1
# The maximum sentence length over all data partitions
self.max_len = max(len(data_entry['tokens']) for data in partitions for data_entry in data)
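
For intuition, a toy example of the quantile computation; the coverage rate 0.995 is my assumption for illustration, the actual value is the MAX_SIZE_ID_COV_RATE constant in the code:

import math

import numpy

span_sizes = [1, 2, 2, 3] * 50 + [40]   # 200 short hypothetical entity lengths plus one very long outlier
max_size_id = math.ceil(numpy.quantile(span_sizes, 0.995)) - 1
print(max_size_id)   # 2, i.e. size ids 0..2 (lengths 1..3); the single length-40 outlier does not inflate the table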

The argument passed to size_embedding is self._get_span_size_ids(full_hidden.size(1)), where
full_hidden.size(1) is the maximum sentence length in this batch.

def _get_span_size_ids(self, seq_len: int):
    return self._span_size_ids[:seq_len, :seq_len]

_span_size_ids holds all possible span sizes, precomputed from the maximum sentence length:

self.register_buffer('_span_size_ids', torch.arange(config.max_len) - torch.arange(config.max_len).unsqueeze(-1))
# Create `_span_non_mask` before changing values of `_span_size_ids`
self.register_buffer('_span_non_mask', self._span_size_ids >= 0)
self._span_size_ids.masked_fill_(self._span_size_ids < 0, 0)
self._span_size_ids.masked_fill_(self._span_size_ids > config.max_size_id, config.max_size_id)

As an example, assume a sentence length of 6:

max_len = 6
torch.arange(max_len) - torch.arange(max_len).unsqueeze(-1)

# which gives the following tensor
tensor([[ 0,  1,  2,  3,  4,  5],
        [-1,  0,  1,  2,  3,  4],
        [-2, -1,  0,  1,  2,  3],
        [-3, -2, -1,  0,  1,  2],
        [-4, -3, -2, -1,  0,  1],
        [-5, -4, -3, -2, -1,  0]])

Each row represents the spans from that position to later positions, so the negative values are meaningless: a position can only form spans with positions after it, which is why the code sets all negative values to 0.
Spans that are too long are also unrealistic, so the code further clips sizes larger than max_size_id to max_size_id. For example, with sentence length 6 and a maximum size of 3, the adjusted span_size_ids becomes:

tensor([[0, 1, 2, 3, 3, 3],
        [0, 0, 1, 2, 3, 3],
        [0, 0, 0, 1, 2, 3],
        [0, 0, 0, 0, 1, 2],
        [0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0]])
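
For completeness, _span_non_mask from the register_buffer snippet above is simply the upper-triangular part (start <= end) of the same matrix; for max_len=6 it looks like this (my own rendering):

torch.arange(6) - torch.arange(6).unsqueeze(-1) >= 0
# tensor([[ True,  True,  True,  True,  True,  True],
#         [False,  True,  True,  True,  True,  True],
#         [False, False,  True,  True,  True,  True],
#         [False, False, False,  True,  True,  True],
#         [False, False, False, False,  True,  True],
#         [False, False, False, False, False,  True]])
# In the decoder's forward pass, curr_non_mask selects exactly these valid (start, end) cells.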


Back to BoundarySelectionDecoder's forward method: after the first step, compute_scores, batch_scores has size (batch_size, seq_len, seq_len, num_labels); the last step is computing the loss.
The last dimension of batch_scores already equals the number of labels, so these are the final predictions up to a softmax. The label_ids in boundaries_obj, detailed above, are the probability distributions obtained by smoothing the gold labels, and the loss is computed between these two:

def soft_label_cross_entropy(logits: torch.Tensor, soft_target: torch.Tensor, weight: torch.Tensor=None, reduction: str='none'):
    """Soft label cross entropy loss.

    Parameters
    ----------
    logits : torch.Tensor (num_entries, logit_dim)
        Logits before softmax.
    soft_target : torch.Tensor (num_entries, logit_dim)
        The ground-truth distribution over indexes, s.t. `target.sum(dim=-1)` equals 1 in all entries.
    weight : torch.Tensor (logit_dim, )
        A manual rescaling weight given to each class.
    """
    _check_soft_target(soft_target)

    log_prob = logits.log_softmax(dim=-1)

    if weight is not None:
        log_prob = log_prob * weight

    losses = -(log_prob * soft_target).sum(dim=-1)
    return _reduce_losses(losses, sample_weight=None, reduction=reduction)
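
A minimal usage sketch of this loss on a single span cell, with purely illustrative numbers:

import torch

logits = torch.tensor([[2.0, 0.5, -1.0]])        # scores for 3 labels at one (start, end) cell
soft_target = torch.tensor([[0.1, 0.9, 0.0]])    # smoothed gold distribution (sums to 1)
loss = -(logits.log_softmax(dim=-1) * soft_target).sum(dim=-1)
print(loss)   # approximately tensor([1.5914]): cross-entropy against the soft distribution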

Experimental Results

Results without BERT:
[2024-12-08 16:47:37 INFO] Evaluating on dev-set
[2024-12-08 16:47:37 INFO] ER | Micro Precision: 86.340%
[2024-12-08 16:47:37 INFO] ER | Micro Recall: 88.243%
[2024-12-08 16:47:37 INFO] ER | Micro F1-score: 87.281%
[2024-12-08 16:47:37 INFO] ER | Macro Precision: 91.373%
[2024-12-08 16:47:37 INFO] ER | Macro Recall: 87.017%
[2024-12-08 16:47:37 INFO] ER | Macro F1-score: 88.133%
[2024-12-08 16:47:37 INFO] Evaluating on test-set
[2024-12-08 16:47:37 INFO] ER | Micro Precision: 88.380%
[2024-12-08 16:47:37 INFO] ER | Micro Recall: 90.061%
[2024-12-08 16:47:37 INFO] ER | Micro F1-score: 89.213%
[2024-12-08 16:47:37 INFO] ER | Macro Precision: 86.640%
[2024-12-08 16:47:37 INFO] ER | Macro Recall: 85.124%
[2024-12-08 16:47:37 INFO] ER | Macro F1-score: 85.496%

With BERT:
Evaluating on dev-set
[2024-12-10 12:55:57 INFO] ER | Micro Precision: 96.862%
[2024-12-10 12:55:57 INFO] ER | Micro Recall: 96.927%
[2024-12-10 12:55:57 INFO] ER | Micro F1-score: 96.895%
[2024-12-10 12:55:57 INFO] ER | Macro Precision: 97.073%
[2024-12-10 12:55:57 INFO] ER | Macro Recall: 98.928%
[2024-12-10 12:55:57 INFO] ER | Macro F1-score: 97.931%
[2024-12-10 12:55:57 INFO] Evaluating on test-set
[2024-12-10 12:55:59 INFO] ER | Micro Precision: 95.856%
[2024-12-10 12:55:59 INFO] ER | Micro Recall: 96.503%
[2024-12-10 12:55:59 INFO] ER | Micro F1-score: 96.179%
[2024-12-10 12:55:59 INFO] ER | Macro Precision: 97.306%
[2024-12-10 12:55:59 INFO] ER | Macro Recall: 98.834%
[2024-12-10 12:55:59 INFO] ER | Macro F1-score: 98.030%

Micro F1 is the number to look at; on the ResumeNER dataset the result here is 96.179%, slightly lower than the result reported in the paper.

On my own dataset:
Current best:
default emb_dim=100: 99.274%~99.31%

Some comparison experiments still need to be run:

--scheduler LinearDecayWithWarmup  99.275%
--emb_dim 0  99.233%
--emb_dim 100 + softword/softlexicon  F1-score: 99.354%
--emb_dim 0 + softword/softlexicon  F1-score: 99.234%
--use_interm2

The default is emb_dim=100; when BERT is used, the default becomes emb_dim=0, which drops the OneHotConfig layer: when computing full_hidden there is then no character embedding from the ohots layer (embedded: (batch_size, seq_len, embed_dim)), only the BERT encoding.

Pretrained single-character embeddings are currently not used, and neither is word-level information. A natural improvement is therefore to incorporate word information, following the Simple Lexicon idea, and additionally to add word embeddings from a large model.