I'm trying to do a case-insensitive aggregation on a keyword-type field, but I'm having trouble getting it to work.
What I've tried so far is adding a custom analyzer named "lowercase" that uses the "keyword" tokenizer and the "lowercase" filter. I then added a multi-field named "use_lowercase" to the mapping for the field I want to aggregate on. I want to keep the existing "text" and "keyword" components of the field, since I may still want to search for terms within it.
Here is the index definition, including the custom analyzer:
PUT authors
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": "lowercase"
        }
      }
    }
  },
  "mappings": {
    "famousbooks": {
      "properties": {
        "Author": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            },
            "use_lowercase": {
              "type": "text",
              "analyzer": "lowercase"
            }
          }
        }
      }
    }
  }
}
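As a quick sanity check, the _analyze API can show what this custom analyzer produces; with the keyword tokenizer plus the lowercase filter, it should emit the whole value as one lowercased token:

```json
GET authors/_analyze
{
  "analyzer": "lowercase",
  "text": "Agatha Christie"
}
```

The response should contain a single token, agatha christie, rather than separate agatha and christie tokens.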
Now I add two records with the same author, but with different capitalization:
POST authors/famousbooks/1
{
  "Book": "The Mysterious Affair at Styles",
  "Year": 1920,
  "Price": 5.92,
  "Genre": "Crime Novel",
  "Author": "Agatha Christie"
}
POST authors/famousbooks/2
{
  "Book": "And Then There Were None",
  "Year": 1939,
  "Price": 6.99,
  "Genre": "Mystery Novel",
  "Author": "Agatha christie"
}
So far so good. Now, if I run a terms aggregation on the author,
GET authors/famousbooks/_search
{
  "size": 0,
  "aggs": {
    "authors-aggs": {
      "terms": {
        "field": "Author.use_lowercase"
      }
    }
  }
}
I get the following result:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [Author.use_lowercase] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "authors",
        "node": "yxcoq_eKRL2r6JGDkshjxg",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [Author.use_lowercase] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
        }
      }
    ],
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [Author.use_lowercase] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
    }
  },
  "status": 400
}
So it looks to me like the aggregation treats the field as text rather than keyword, hence the fielddata error. I had assumed ES would be sophisticated enough to recognize that the field is effectively a keyword (via the custom analyzer) and is therefore aggregatable, but apparently not.
If I add "fielddata": true to the mapping for Author, the aggregation works fine, but given the dire warnings about high heap usage when setting this value, I'm hesitant to do so.
Is there a best practice for this kind of case-insensitive keyword aggregation? I was hoping I could just put "type": "keyword", "filter": "lowercase" in the mapping section, but that doesn't seem to be available.
I feel like I'd be reaching for far too big a hammer if I went the "fielddata": true route. Any help with this would be much appreciated!
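For reference, the fielddata workaround I'd rather avoid would be a mapping change along these lines (a sketch only; the PUT-mapping path assumes a pre-7.x index with the famousbooks type, as in the index definition above):

```json
PUT authors/_mapping/famousbooks
{
  "properties": {
    "Author": {
      "type": "text",
      "fields": {
        "use_lowercase": {
          "type": "text",
          "analyzer": "lowercase",
          "fielddata": true
        }
      }
    }
  }
}
```

This makes the aggregation run, but it builds fielddata on the heap by uninverting the index, which is exactly what the error message warns about.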
Best answer: It turns out the solution is to use a custom normalizer rather than a custom analyzer.
PUT authors
{
  "settings": {
    "analysis": {
      "normalizer": {
        "myLowercase": {
          "type": "custom",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "famousbooks": {
      "properties": {
        "Author": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            },
            "use_lowercase": {
              "type": "keyword",
              "normalizer": "myLowercase",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}
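A normalizer can be sanity-checked much like an analyzer; on versions where the _analyze API accepts a normalizer parameter, something like the following should return a single lowercased agatha christie token:

```json
GET authors/_analyze
{
  "normalizer": "myLowercase",
  "text": "Agatha Christie"
}
```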
This then allows terms aggregations on the Author.use_lowercase field without any problems.
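Re-running the terms aggregation from the question against this mapping should now merge the differently-cased values; with the two example documents above, the expectation is a single bucket:

```json
GET authors/famousbooks/_search
{
  "size": 0,
  "aggs": {
    "authors-aggs": {
      "terms": {
        "field": "Author.use_lowercase"
      }
    }
  }
}
```

Both documents should land in one "agatha christie" bucket with a doc_count of 2, since the normalizer lowercases the keyword value at index time while leaving the stored _source untouched.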