To implement letter-by-letter autocompletion, documents must be analyzed into pinyin tokens. A pinyin analysis plugin for Elasticsearch is available on GitHub:
- Pinyin analyzer download: [https://github.com/medcl/elasticsearch-analysis-pinyin](https://github.com/medcl/elasticsearch-analysis-pinyin)
- Download and extract the plugin, then upload it to the ES plugin directory: /var/lib/docker/volumes/es-plugins/_data
- Restart ES
- Test the pinyin analyzer:
```
POST /_analyze
{
  "analyzer": "pinyin",
  "text": "如家酒店真不错"
}
```
Result:
```
{
  "tokens" : [
    {
      "token" : "ru",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "rjjdzbc",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "jia",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "jiu",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 2
    },
    {
      "token" : "dian",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 3
    },
    {
      "token" : "zhen",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 4
    },
    {
      "token" : "bu",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 5
    },
    {
      "token" : "cuo",
      "start_offset" : 0,
      "end_offset" : 0,
      "type" : "word",
      "position" : 6
    }
  ]
}
```
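As the output shows, the analyzer emits a single-letter abbreviation token ("rjjdzbc") alongside the full pinyin of each character, which is what makes letter-based completion possible. Note that every token's start_offset and end_offset is 0: the plugin does not preserve the original character offsets by default, so it is not suitable for highlighting. In practice the pinyin analyzer is rarely applied on its own; it is typically wrapped as a token filter inside a custom analyzer and applied to a field of type completion. A minimal sketch follows (the index name `hotel`, the analyzer/field names, and the chosen filter settings are illustrative; the filter options are among the plugin's documented parameters):

```json
PUT /hotel
{
  "settings": {
    "analysis": {
      "analyzer": {
        "completion_analyzer": {
          "tokenizer": "keyword",
          "filter": ["py"]
        }
      },
      "filter": {
        "py": {
          "type": "pinyin",
          "keep_full_pinyin": false,
          "keep_joined_full_pinyin": true,
          "keep_original": true,
          "limit_first_letter_length": 16,
          "remove_duplicated_term": true
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "suggestion": {
        "type": "completion",
        "analyzer": "completion_analyzer"
      }
    }
  }
}
```

A completion suggester query against such a field would then match prefixes typed as letters, e.g. "rj" for "如家":

```json
GET /hotel/_search
{
  "suggest": {
    "title_suggest": {
      "prefix": "rj",
      "completion": {
        "field": "suggestion",
        "skip_duplicates": true,
        "size": 10
      }
    }
  }
}
```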
From: https://www.cnblogs.com/czzz/p/17739805.html