I have an HTML document that I want to tokenize with spaCy, keeping each HTML tag as a single token. Here's my code:
import spacy
from spacy.symbols import ORTH

nlp = spacy.load('en', vectors=False, parser=False, entity=False)

# register the tags as special cases so they (hopefully) stay single tokens
nlp.tokenizer.add_special_case(u'<i>', [{ORTH: u'<i>'}])
nlp.tokenizer.add_special_case(u'</i>', [{ORTH: u'</i>'}])

doc = nlp('Hello, <i>world</i> !')
print([e.text for e in doc])
The output is:
['Hello', ',', '<', 'i', '>', 'world</i', '>', '!']
If I put spaces around the tags, like this:
doc = nlp('Hello, <i> world </i> !')
the output is what I want:
['Hello', ',', '<i>', 'world', '</i>', '!']
But I'd like to avoid complicated preprocessing of the HTML. Any idea how I can approach this?
Best answer: You need to create a custom Tokenizer.

Your custom Tokenizer will be exactly like spaCy's default tokenizer, except that the '<' and '>' symbols are removed from the prefixes and suffixes, and a new prefix rule and a new suffix rule are added for the tags. (The add_special_case rules above only fire when the tag is an exact whitespace-delimited chunk, which is why inserting spaces works; inside '<i>world</i>', the default prefix and suffix rules split '<' and '>' off as ordinary punctuation first.)

Code:
import spacy
from spacy.tokens import Token

Token.set_extension('tag', default=False)  # custom token attribute (not used below)

def create_custom_tokenizer(nlp):
    from spacy import util
    from spacy.tokenizer import Tokenizer
    from spacy.lang.tokenizer_exceptions import TOKEN_MATCH

    # add a prefix rule for '<i>' and a suffix rule for '</i>'
    prefixes = nlp.Defaults.prefixes + ('^<i>',)
    suffixes = nlp.Defaults.suffixes + ('</i>$',)

    # remove the bare tag symbols from the prefixes and suffixes,
    # so '<' and '>' are no longer split off on their own
    prefixes = list(prefixes)
    prefixes.remove('<')
    prefixes = tuple(prefixes)
    suffixes = list(suffixes)
    suffixes.remove('>')
    suffixes = tuple(suffixes)

    infixes = nlp.Defaults.infixes
    rules = nlp.Defaults.tokenizer_exceptions
    token_match = TOKEN_MATCH

    prefix_search = util.compile_prefix_regex(prefixes).search
    suffix_search = util.compile_suffix_regex(suffixes).search
    infix_finditer = util.compile_infix_regex(infixes).finditer

    return Tokenizer(nlp.vocab, rules=rules,
                     prefix_search=prefix_search,
                     suffix_search=suffix_search,
                     infix_finditer=infix_finditer,
                     token_match=token_match)

nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = create_custom_tokenizer(nlp)

doc = nlp('Hello, <i>world</i> !')
print([e.text for e in doc])
# ['Hello', ',', '<i>', 'world', '</i>', '!']
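
For what it's worth, the same idea generalizes beyond <i>. Here is a minimal sketch of my own (not part of the original answer), assuming spaCy 2.x, where one regex rule covers any simple, attribute-free tag; the pattern </?\w+> and the helper name create_tag_tokenizer are illustrations:

import spacy
from spacy import util
from spacy.tokenizer import Tokenizer
from spacy.lang.tokenizer_exceptions import TOKEN_MATCH

# hypothetical helper: keep any tag matching </?\w+> as a single token
def create_tag_tokenizer(nlp, tag_pattern=r'</?\w+>'):
    # drop the bare '<' and '>' rules so they can't fire before the tag rule,
    # then append the tag regex as both a prefix and a suffix rule
    prefixes = tuple(p for p in nlp.Defaults.prefixes if p not in ('<', '>'))
    suffixes = tuple(s for s in nlp.Defaults.suffixes if s not in ('<', '>'))
    prefixes += (tag_pattern,)  # compile_prefix_regex anchors each rule with '^'
    suffixes += (tag_pattern,)  # compile_suffix_regex anchors each rule with '$'
    return Tokenizer(nlp.vocab,
                     rules=nlp.Defaults.tokenizer_exceptions,
                     prefix_search=util.compile_prefix_regex(prefixes).search,
                     suffix_search=util.compile_suffix_regex(suffixes).search,
                     infix_finditer=util.compile_infix_regex(nlp.Defaults.infixes).finditer,
                     token_match=TOKEN_MATCH)

nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = create_tag_tokenizer(nlp)
print([t.text for t in nlp('Hello, <b>big <i>world</i></b> !')])

Nested closing tags like '</i></b>' are handled because suffix rules are stripped repeatedly from the end of each chunk. Real-world HTML (attributes, comments, entities) quickly outgrows a single regex, though, so for anything beyond simple markup, extracting the text with an HTML parser first is still the safer route.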