Table of Contents
- Full-text search: ES
- ES basic concepts
- Installing ES with Docker
- Basic operations by example
- Query DSL
- Aggregations
- Mapping
- Installing the ik analyzer
- Installing Nginx and configuring a remote dictionary
- Integrating with SpringBoot
- Creating a microservice module and importing dependencies
- Configuration
- Usage test
- Usage scenarios in the project
- Mall business
- Putting products on the shelf
- Designing the ES Mapping
- Writing the listing code
- Feign source code
- Details of wrapping the response in R
I. Full-Text Search: ES
1. ES Basic Concepts
Concepts
- Index: analogous to a database in a relational database
- Type: analogous to a table (note: types were removed after 6.0; the default type is _doc, so documents can simply be thought of as stored directly under the index)
- Document: analogous to a row/record
- Property: analogous to a column/field
Inverted index
- The low-level reason ES is fast is the inverted index.
- Simply put, whole sentences are split into words (terms); the inverted index records which terms appear in which documents, so a term can be found without scanning every document (a term may appear in many documents; the inverted index works like a map, letting you jump straight to them).
- Relevance score: the search phrase is split into several terms; these terms may appear in many documents, and documents in which the terms are denser get a higher relevance score, i.e. they are what you are looking for.
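The idea can be sketched in a few lines of Java (a toy map-based index for illustration only; the class and method names are made up and this is not ES's actual Lucene data structure):

```java
import java.util.*;

public class InvertedIndexDemo {
    // Build a minimal inverted index: term -> sorted set of doc ids containing it.
    static Map<String, Set<Integer>> build(List<String> docs) {
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            for (String term : docs.get(id).toLowerCase().split("\\s+")) {
                index.computeIfAbsent(term, k -> new TreeSet<>()).add(id);
            }
        }
        return index;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList("red mi phone", "red apple", "mi pad");
        Map<String, Set<Integer>> index = build(docs);
        // Looking up a term is a single map access; no full scan of the
        // documents is needed, which is why retrieval is fast.
        System.out.println(index.get("red")); // docs 0 and 1
        System.out.println(index.get("mi"));  // docs 0 and 2
    }
}
```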
2. Installing ES with Docker
Basic steps
- Run
docker pull elasticsearch:7.4.2
and docker pull kibana:7.4.2
to pull the images
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker pull elasticsearch:7.4.2
7.4.2: Pulling from library/elasticsearch
d8d02d457314: Pull complete
f26fec8fc1eb: Pull complete
8177ad1fe56d: Pull complete
d8fdf75b73c1: Pull complete
47ac89c1da81: Pull complete
fc8e09b48887: Pull complete
367b97f47d5c: Pull complete
Digest: sha256:543bf7a3d61781bad337d31e6cc5895f16b55aed4da48f40c346352420927f74
Status: Downloaded newer image for elasticsearch:7.4.2
docker.io/library/elasticsearch:7.4.2
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker pull kibana:7.4.2
7.4.2: Pulling from library/kibana
d8d02d457314: Already exists
bc64069ca967: Pull complete
c7aae8f7d300: Pull complete
8da0971e3b41: Pull complete
58ea4bb2901c: Pull complete
b1e21d4c2a7e: Pull complete
3953eac632cb: Pull complete
5f4406500758: Pull complete
340d85e0d1c7: Pull complete
1768564d16fb: Pull complete
Digest: sha256:355f9c979dc9cdac3ff9a75a817b8b7660575e492bf7dbe796e705168f167efc
Status: Downloaded newer image for kibana:7.4.2
docker.io/library/kibana:7.4.2
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql 5.7 9f1d21c1025a 13 days ago 448MB
redis latest 08502081bff6 5 weeks ago 105MB
kibana 7.4.2 230d3ded1abc 21 months ago 1.1GB
elasticsearch 7.4.2 b1179d41a7b4 21 months ago 855MB
[root@iZ2vc8owmlobwkazif1efpZ ~]#
- Install ES: prepare, start, fix the problems encountered, and test access
1. Prepare host directories for the Docker volume mounts
[root@iZ2vc8owmlobwkazif1efpZ ~]# mkdir -p /mydata/elasticsearch/config
[root@iZ2vc8owmlobwkazif1efpZ ~]# mkdir -p /mydata/elasticsearch/data
[root@iZ2vc8owmlobwkazif1efpZ ~]# echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml
2. Start the container
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
> -e "discovery.type=single-node" \ =======> single node instead of a cluster
> -e ES_JAVA_OPTS="-Xms64m -Xmx512m" \ ======> minimum / maximum heap size
> -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \ ====> config file mount
> -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \ ====> data mount
> -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \ ====> plugin mount; plugins can be installed into this directory later
> -d elasticsearch:7.4.2
52ea4c88d637bf5139bd6e598dd4d79815503b47b046ca1f547a9efe1627daa6
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f26a09fb394e redis "docker-entrypoint.s…" 11 days ago Up 11 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
96f1910989d5 mysql:5.7 "docker-entrypoint.s…" 11 days ago Up 4 days 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52ea4c88d637 elasticsearch:7.4.2 "/usr/local/bin/dock…" About a minute ago Exited (1) About a minute ago elasticsearch
f26a09fb394e redis "docker-entrypoint.s…" 11 days ago Up 11 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
96f1910989d5 mysql:5.7 "docker-entrypoint.s…" 11 days ago Up 4 days 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
[root@iZ2vc8owmlobwkazif1efpZ ~]#
3. The container did not actually stay up
Check the logs: it is a permissions problem on the mounted data directory
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker logs elasticsearch
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
{"type": "server", "timestamp": "2021-08-02T10:59:51,444Z", "level": "WARN", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "elasticsearch", "node.name": "52ea4c88d637", "message": "uncaught exception in thread [main]",
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.4.2.jar:7.4.2]",
"at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.4.2.jar:7.4.2]",
"Caused by: org.elasticsearch.ElasticsearchException: failed to bind service",
"at org.elasticsearch.node.Node.<init>(Node.java:614) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.node.Node.<init>(Node.java:255) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]",
"... 6 more",
"Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes",
"at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]",
"at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389) ~[?:?]",
"at java.nio.file.Files.createDirectory(Files.java:693) ~[?:?]",
"at java.nio.file.Files.createAndCheckIsDirectory(Files.java:800) ~[?:?]",
"at java.nio.file.Files.createDirectories(Files.java:786) ~[?:?]",
"at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:272) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:209) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:269) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.node.Node.<init>(Node.java:275) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.node.Node.<init>(Node.java:255) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.4.2.jar:7.4.2]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]",
"... 6 more"] }
[root@iZ2vc8owmlobwkazif1efpZ ~]#
4. Recursively grant read/write permissions, then start the container again
[root@iZ2vc8owmlobwkazif1efpZ ~]# chmod -R 777 /mydata/elasticsearch
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker start 52e
52e
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52ea4c88d637 elasticsearch:7.4.2 "/usr/local/bin/dock…" 4 minutes ago Up 8 seconds 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp elasticsearch
f26a09fb394e redis "docker-entrypoint.s…" 11 days ago Up 11 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
96f1910989d5 mysql:5.7 "docker-entrypoint.s…" 11 days ago Up 4 days 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
[root@iZ2vc8owmlobwkazif1efpZ ~]#
5. Since this is a cloud server, port 9200 must be opened in the security group before ES can be accessed
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker start 52e
52e
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52ea4c88d637 elasticsearch:7.4.2 "/usr/local/bin/dock…" 14 minutes ago Up 4 seconds 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp elasticsearch
f26a09fb394e redis "docker-entrypoint.s…" 11 days ago Up About a minute 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
96f1910989d5 mysql:5.7 "docker-entrypoint.s…" 11 days ago Up About a minute 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
[root@iZ2vc8owmlobwkazif1efpZ ~]#
- Install Kibana
1. Start Kibana
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker run --name kibana -e ELASTICSEARCH_HOSTS=http://47.108.148.53:9200/ -p 5601:5601 \
> -d kibana:7.4.2
ff1bbd4c6d294a5543c4f04788a5fe51c6625d1dc06ae97603abe74328c52ee4
[root@iZ2vc8owmlobwkazif1efpZ ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff1bbd4c6d29 kibana:7.4.2 "/usr/local/bin/dumb…" 3 seconds ago Up 3 seconds 5601/tcp, 0.0.0.0:5601->3344/tcp, :::5601->3344/tcp kibana
52ea4c88d637 elasticsearch:7.4.2 "/usr/local/bin/dock…" 24 minutes ago Up 10 minutes 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp elasticsearch
f26a09fb394e redis "docker-entrypoint.s…" 11 days ago Up 11 minutes 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
96f1910989d5 mysql:5.7 "docker-entrypoint.s…" 11 days ago Up 11 minutes 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
- Enable auto-restart:
docker update xxx --restart=always
3. Basic Operations by Example
GET _cat/health // check shard/cluster health
GET _cat/master // show the master node
GET _cat/indices // list all indices (databases)
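A few more commonly used _cat endpoints (the `?v` flag adds column headers):

```
GET _cat/nodes?v      // list cluster nodes
GET _cat/indices?v    // indices with doc counts and store size
GET _cat/shards?v     // shard allocation
```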
4. Search: Query DSL
ES supports two basic ways of searching:
- REST request URI: send the search parameters in the URI (uri + query params)
- Query DSL: send them in a REST request body (uri + request body)
Basic syntax of the Query DSL
- The outermost
{ }
is the request body
- Inside it you can put multiple
"xxx" : { conditions }
entries stating which operations to perform
# Method 1: put the query parameters in the URL
GET bank/_search?q=*&sort=account_number:asc
# Method 2: put the query parameters in the request body
GET bank/_search
{
"query": {
"match_all": {}
},
"sort": [
{
"account_number": "asc"
},
{
"balance": "desc"
}
]
}
Examples
1. Field match: match
"match": { field: value }
goes inside query; match_all : { }
matches everything
- Exact match for non-string (numeric) values
- Fuzzy, analyzed match for text
Example
2. Phrase match: match_phrase
3. Multi-field match: multi_match
4. Compound query: bool
Combined with
must: [ ]
must_not: [ ]
-
should: [ ]
documents matching these get a higher score
5. Filter: filter
Unlike must, must_not and should, filter does not contribute to the relevance score; a filter-only query returns documents with a score of 0.
6. Exact-value query: term (use term for keyword/numeric exact values; use match for full-text fields)
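The query types above can be sketched against the sample bank data set (field names come from the official accounts test data; exact hits depend on your data):

```
# match: analyzed full-text match on address
GET bank/_search
{ "query": { "match": { "address": "mill lane" } } }

# match_phrase: the whole phrase must appear, in order
GET bank/_search
{ "query": { "match_phrase": { "address": "mill lane" } } }

# multi_match: one text matched against several fields
GET bank/_search
{ "query": { "multi_match": { "query": "mill", "fields": ["address", "city"] } } }

# bool: combine conditions; should only raises the score, filter does not score
GET bank/_search
{
  "query": {
    "bool": {
      "must": [ { "match": { "gender": "M" } } ],
      "must_not": [ { "match": { "age": "28" } } ],
      "should": [ { "match": { "lastname": "Holt" } } ],
      "filter": [ { "range": { "age": { "gte": 18, "lte": 40 } } } ]
    }
  }
}

# term: exact value match (for keyword/numeric fields)
GET bank/_search
{ "query": { "term": { "age": 28 } } }
```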
5. Aggregations
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html
Aggregations summarize data as metrics, statistics, or other analytics. They help answer questions such as:
- What is the average load time of my website?
- Who are my most valuable customers, by transaction volume?
- What would be considered a large file on my network?
- How many products are in each product category?
Example
GET /bank/_search
{
"aggs": {
"ageAggs": {
"terms": {
"field": "age"
}
}
}
}
Aggregations can be nested inside other aggregations.
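For instance, a sketch that buckets by age and computes the average balance inside each bucket (bank data set; "size": 0 suppresses the document hits):

```
GET bank/_search
{
  "size": 0,
  "aggs": {
    "ageAgg": {
      "terms": { "field": "age", "size": 10 },
      "aggs": {
        "balanceAvg": { "avg": { "field": "balance" } }
      }
    }
  }
}
```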
6. Mapping
The mapping lets you inspect each field's type.
Specify the mapping when creating the index.
More field types in the reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html
The type of an existing field cannot be updated; instead, create a new index with the desired mapping and migrate the data.
# An existing field's type cannot be updated; to change it, create a new index and migrate the data
# 1. Create the new index
PUT /my-newindex
{
"mappings": {
"properties": {
"address": {
"type": "keyword",
"index": false
},
"age": {
"type": "integer"
},
"email": {
"type": "text"
},
"name": {
"type": "text"
}
}
}
}
GET /my-index-000001/_search
GET /my-newindex/_search
GET /my-index-000001/_mapping
GET /my-newindex/_mapping
# 2. Migrate the data with _reindex
POST _reindex
{
"source": {
"index": "my-index-000001"
},
"dest": {
"index":"my-newindex"
}
}
7. Installing the IK Analyzer
Basic steps
- Enter the plugins directory mounted on the host and run
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.4.2/elasticsearch-analysis-ik-7.4.2.zip
- Unzip and delete the archive:
yum install unzip
unzip elasticsearch-analysis-ik-7.4.2.zip
rm -rf *.zip
- Create a new
ik
folder and move all the extracted files into it
- Enter the container to verify the plugin is listed, then restart ES from the host
[root@iZ2vc8owmlobwkazif1efpZ /]# docker exec -it 52e /bin/bash
[root@52ea4c88d637 elasticsearch]# ls
LICENSE.txt NOTICE.txt README.textile bin config data jdk lib logs modules plugins
[root@52ea4c88d637 elasticsearch]# cd ./bin/
[root@52ea4c88d637 bin]# ls
elasticsearch elasticsearch-enve elasticsearch-setup-passwords x-pack-env
elasticsearch-certgen elasticsearch-keystore elasticsearch-shard x-pack-security-env
elasticsearch-certutil elasticsearch-migrate elasticsearch-sql-cli x-pack-watcher-env
elasticsearch-cli elasticsearch-node elasticsearch-sql-cli-7.4.2.jar
elasticsearch-croneval elasticsearch-plugin elasticsearch-syskeygen
elasticsearch-env elasticsearch-saml-metadata elasticsearch-users
[root@52ea4c88d637 bin]# elasticsearch-plugin list
ik
[root@52ea4c88d637 bin]#
Test
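A quick way to test is the _analyze API; ik provides two analyzers, ik_smart (coarse-grained) and ik_max_word (fine-grained):

```
POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}
```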
8. Installing Nginx to Serve a Remote Dictionary
1. Installation steps
- Pull Nginx with Docker:
docker pull nginx:1.10
- Create a host directory for the mounts:
mkdir nginx
- Start once so the default config can be copied out of the container:
docker run -p 80:80 --name nginx -d nginx:1.10
docker container cp nginx:/etc/nginx .
- Stop the temporary nginx:
docker stop d47
- Create a conf folder, move all the nginx config files into it, then start nginx:
[root@iZ2vc8owmlobwkazif1efpZ conf]# docker run -p 80:80 --name nginx \
> -v /mydata/nginx/html:/usr/share/nginx/html \
> -v /mydata/nginx/logs:/var/log/nginx \
> -v /mydata/nginx/conf:/etc/nginx \
> -d nginx:1.10
ac29e64e1aafdbe2b1e2fd2745dde0d86742e0010c52edeb09fdd5e067f32fdc
[root@iZ2vc8owmlobwkazif1efpZ conf]#
2. Create the dictionary file
3. Point ES's IK analyzer at the remote dictionary
Restart ES and test
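The remote dictionary is configured in the plugin's IKAnalyzer.cfg.xml (under the ik/config directory). A sketch is below; the dictionary URL is an assumption pointing at the Nginx just installed, and the file path on Nginx is hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <entry key="ext_dict"></entry>
    <entry key="ext_stopwords"></entry>
    <!-- remote extension dictionary served by Nginx (hypothetical path) -->
    <entry key="remote_ext_dict">http://47.108.148.53/es/fenci.txt</entry>
</properties>
```

IK polls this URL and hot-reloads the dictionary when the file changes, so new words take effect without restarting ES.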
II. Integrating with SpringBoot
1. Create a Microservice Module and Import Dependencies
<!-- ES dependency -->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
<version>7.4.2</version>
</dependency>
2. Configuration
Create a configuration class
package henu.soft.xiaosi.search.config;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class MyElasticSearchConfig {
@Bean
RestHighLevelClient client() {
RestClientBuilder builder = RestClient.builder(new HttpHost("47.108.148.53", 9200, "http"));
return new RestHighLevelClient(builder);
}
}
Register with the service registry
3. Usage Test
package henu.soft.xiaosi.search;
import org.elasticsearch.client.RestHighLevelClient;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
@SpringBootTest
class GuliShopSearchApplicationTests {
@Autowired
RestHighLevelClient restHighLevelClient;
@Test
void contextLoads() {
System.out.println(restHighLevelClient); //org.elasticsearch.client.RestHighLevelClient@30bbe83
}
}
@Test
void test1() throws IOException {
    Product product = new Product();
    product.setSpuName("华为");
    product.setId(10L);
    IndexRequest request = new IndexRequest("product").id("20")
            .source("spuName", "华为", "id", 20L);
    try {
        // 1. index the document (client here is the injected RestHighLevelClient)
        IndexResponse response = client.index(request, RequestOptions.DEFAULT);
        System.out.println(request.toString());
        IndexResponse response2 = client.index(request, RequestOptions.DEFAULT);
    } catch (ElasticsearchException e) {
        if (e.status() == RestStatus.CONFLICT) {
            // version conflict: a document with this id already exists
        }
    }
}
@Test
void test3() throws IOException {
    // 1. build the search request
    SearchRequest search = new SearchRequest();
    // 2. specify the index
    search.indices("bank");
    // 3. build the query conditions
    SearchSourceBuilder ssb = new SearchSourceBuilder();
    // 3.1 query
    ssb.query(QueryBuilders.matchAllQuery());
    // 3.2 aggregations
    // terms aggregation over the age distribution
    // (the field name must match the mapping; the captured run below used a
    // "aga" typo here, which is why its agaAgg buckets came back empty)
    TermsAggregationBuilder aggAgg = AggregationBuilders.terms("agaAgg").field("age").size(10);
    ssb.aggregation(aggAgg);
    // terms aggregation over balance
    TermsAggregationBuilder balanceAgg = AggregationBuilders.terms("balanceAgg").field("balance").size(10);
    ssb.aggregation(balanceAgg);
    // 4. attach the conditions to the request
    search.source(ssb);
    // 5. execute the search
    SearchResponse response = client.search(search, RequestOptions.DEFAULT);
    // 6. parse the result (hits and aggregation values)
    // 6.1 get all matching documents
    // the outer hits object
    SearchHits hits = response.getHits();
    // the inner array of document hits
    SearchHit[] dataHits = hits.getHits();
    for (SearchHit dataHit : dataHits) {
        String id = dataHit.getId();
        String index = dataHit.getIndex();
        float score = dataHit.getScore();
        String type = dataHit.getType();
        long seqNo = dataHit.getSeqNo();
        Map<String, Object> sourceAsMap = dataHit.getSourceAsMap();
        System.out.println(id + "==" + index + "==" + score + "==" + type + "==" + seqNo);
        System.out.println(sourceAsMap);
    }
    System.out.println(response);
}
Output
1==bank==1.0==account==-2
{account_number=1, firstname=Amber, address=880 Holmes Lane, balance=39225, gender=M, city=Brogan, employer=Pyrami, state=IL, age=32, email=amberduke@pyrami.com, lastname=Duke}
6==bank==1.0==account==-2
{account_number=6, firstname=Hattie, address=671 Bristol Street, balance=5686, gender=M, city=Dante, employer=Netagy, state=TN, age=36, email=hattiebond@netagy.com, lastname=Bond}
13==bank==1.0==account==-2
{account_number=13, firstname=Nanette, address=789 Madison Street, balance=32838, gender=F, city=Nogal, employer=Quility, state=VA, age=28, email=nanettebates@quility.com, lastname=Bates}
18==bank==1.0==account==-2
{account_number=18, firstname=Dale, address=467 Hutchinson Court, balance=4180, gender=M, city=Orick, employer=Boink, state=MD, age=33, email=daleadams@boink.com, lastname=Adams}
20==bank==1.0==account==-2
{account_number=20, firstname=Elinor, address=282 Kings Place, balance=16418, gender=M, city=Ribera, employer=Scentric, state=WA, age=36, email=elinorratliff@scentric.com, lastname=Ratliff}
25==bank==1.0==account==-2
{account_number=25, firstname=Virginia, address=171 Putnam Avenue, balance=40540, gender=F, city=Nicholson, employer=Filodyne, state=PA, age=39, email=virginiaayala@filodyne.com, lastname=Ayala}
32==bank==1.0==account==-2
{account_number=32, firstname=Dillard, address=702 Quentin Street, balance=48086, gender=F, city=Veguita, employer=Quailcom, state=IN, age=34, email=dillardmcpherson@quailcom.com, lastname=Mcpherson}
37==bank==1.0==account==-2
{account_number=37, firstname=Mcgee, address=826 Fillmore Place, balance=18612, gender=M, city=Tooleville, employer=Reversus, state=OK, age=39, email=mcgeemooney@reversus.com, lastname=Mooney}
44==bank==1.0==account==-2
{account_number=44, firstname=Aurelia, address=502 Baycliff Terrace, balance=34487, gender=M, city=Yardville, employer=Orbalix, state=DE, age=37, email=aureliaharding@orbalix.com, lastname=Harding}
49==bank==1.0==account==-2
{account_number=49, firstname=Fulton, address=451 Humboldt Street, balance=29104, gender=F, city=Sunriver, employer=Anocha, state=RI, age=23, email=fultonholt@anocha.com, lastname=Holt}
{"took":13,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1000,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"bank","_type":"account","_id":"1","_score":1.0,"_source":{"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"gender":"M","address":"880 Holmes Lane","employer":"Pyrami","email":"amberduke@pyrami.com","city":"Brogan","state":"IL"}},{"_index":"bank","_type":"account","_id":"6","_score":1.0,"_source":{"account_number":6,"balance":5686,"firstname":"Hattie","lastname":"Bond","age":36,"gender":"M","address":"671 Bristol Street","employer":"Netagy","email":"hattiebond@netagy.com","city":"Dante","state":"TN"}},{"_index":"bank","_type":"account","_id":"13","_score":1.0,"_source":{"account_number":13,"balance":32838,"firstname":"Nanette","lastname":"Bates","age":28,"gender":"F","address":"789 Madison Street","employer":"Quility","email":"nanettebates@quility.com","city":"Nogal","state":"VA"}},{"_index":"bank","_type":"account","_id":"18","_score":1.0,"_source":{"account_number":18,"balance":4180,"firstname":"Dale","lastname":"Adams","age":33,"gender":"M","address":"467 Hutchinson Court","employer":"Boink","email":"daleadams@boink.com","city":"Orick","state":"MD"}},{"_index":"bank","_type":"account","_id":"20","_score":1.0,"_source":{"account_number":20,"balance":16418,"firstname":"Elinor","lastname":"Ratliff","age":36,"gender":"M","address":"282 Kings Place","employer":"Scentric","email":"elinorratliff@scentric.com","city":"Ribera","state":"WA"}},{"_index":"bank","_type":"account","_id":"25","_score":1.0,"_source":{"account_number":25,"balance":40540,"firstname":"Virginia","lastname":"Ayala","age":39,"gender":"F","address":"171 Putnam 
Avenue","employer":"Filodyne","email":"virginiaayala@filodyne.com","city":"Nicholson","state":"PA"}},{"_index":"bank","_type":"account","_id":"32","_score":1.0,"_source":{"account_number":32,"balance":48086,"firstname":"Dillard","lastname":"Mcpherson","age":34,"gender":"F","address":"702 Quentin Street","employer":"Quailcom","email":"dillardmcpherson@quailcom.com","city":"Veguita","state":"IN"}},{"_index":"bank","_type":"account","_id":"37","_score":1.0,"_source":{"account_number":37,"balance":18612,"firstname":"Mcgee","lastname":"Mooney","age":39,"gender":"M","address":"826 Fillmore Place","employer":"Reversus","email":"mcgeemooney@reversus.com","city":"Tooleville","state":"OK"}},{"_index":"bank","_type":"account","_id":"44","_score":1.0,"_source":{"account_number":44,"balance":34487,"firstname":"Aurelia","lastname":"Harding","age":37,"gender":"M","address":"502 Baycliff Terrace","employer":"Orbalix","email":"aureliaharding@orbalix.com","city":"Yardville","state":"DE"}},{"_index":"bank","_type":"account","_id":"49","_score":1.0,"_source":{"account_number":49,"balance":29104,"firstname":"Fulton","lastname":"Holt","age":23,"gender":"F","address":"451 Humboldt Street","employer":"Anocha","email":"fultonholt@anocha.com","city":"Sunriver","state":"RI"}}]},"aggregations":{"lterms#balanceAgg":{"doc_count_error_upper_bound":0,"sum_other_doc_count":985,"buckets":[{"key":22026,"doc_count":2},{"key":23285,"doc_count":2},{"key":36038,"doc_count":2},{"key":39063,"doc_count":2},{"key":45493,"doc_count":2},{"key":1011,"doc_count":1},{"key":1031,"doc_count":1},{"key":1110,"doc_count":1},{"key":1133,"doc_count":1},{"key":1172,"doc_count":1}]},"sterms#agaAgg":{"doc_count_error_upper_bound":0,"sum_other_doc_count":0,"buckets":[]}}}
Client usage reference: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
4. Usage Scenarios in the Project
ES can be used for:
- Product search on the home page
- Log search and troubleshooting (ELK)
- …
III. Mall Business
1. Putting Products on the Shelf
- Products are saved from the database into ES and displayed on the home page.
- Only listed products are displayed on the site.
- Listed products must be searchable.
Analysis: should ES store skus or spus?
- 1) Searching by name does full-text search over the sku title.
- 2) Searching by specification uses spec attributes, which are spu-level and identical for every sku of the spu.
- 3) Browsing by category id lists spus directly, with the option to switch.
- 4) Saving each sku in full (including its spu attributes) duplicates far too many fields.
- 5) Saving each spu together with its skus also supports search, but the skus become nested objects of the spu, which requires ES's nested type and performs worse.
- 6) Either way, storage and search performance must be traded off.
- 7) If we split the storage (spu + attrs in one index, skus in another), a problem arises:
searching a product name like "手机" (phone) matches many spus; to work out their shared filter attributes we would have to issue a second query carrying all the spu_ids. Assuming 10,000 ids at 4 bytes each, that is roughly 40 KB per request; under 1,000 concurrent searches, roughly 40 MB per wave plus an extra round trip, so transfers stall for a long time and the business flow cannot continue.
So we design a wide, denormalized document as below. This is exactly where a document store differs from a relational database: design wide tables, and do not think in terms of normal forms.
2. Designing the ES Mapping
PUT /product
{
"mappings": {
"properties": {
"skuId": {
"type": "long"
},
"spuId": {
"type": "keyword"
},
"skuTitle": {
"type": "text",
"analyzer": "ik_smart"
},
"skuPrice": {
"type": "keyword"
},
"skuImg": {
"type": "keyword",
"index": false,
"doc_values": false
},
"saleCount": {
"type": "long"
},
"hasStock": {
"type": "boolean"
},
"hotScore": {
"type": "long"
},
"brandId": {
"type": "long"
},
"catalogId": {
"type": "long"
},
"brandName": {
"type": "keyword",
"index": false,
"doc_values": false
},
"brandImg": {
"type": "keyword",
"index": false,
"doc_values": false
},
"catalogName": {
"type": "keyword",
"index": false,
"doc_values": false
},
"attrs": {
"type": "nested",
"properties": {
"attrId": {
"type": "long"
},
"attrName": {
"type": "keyword",
"index": false,
"doc_values": false
},
"attrValue": {
"type": "keyword"
}
}
}
}
}
}
Details
Listing puts back-office products into ES so they can be searched and queried.
- 1) hasStock: whether the product is in stock. Newly listed products are assumed to be in stock, so ES only needs updating once stock runs out.
- 2) When stock is replenished, ES must be updated again.
- 3) hotScore is a popularity score; we simulate it with click counts, updating it only after clicks grow past a threshold.
- 4) Delisting removes the document from ES and updates the status in MySQL.
Listing steps:
- 1) Create the product index in ES with the mapping above.
- 2) On "list", query all the sku data and save it into ES.
- 3) When ES returns success, update the listing status in the database.
Data consistency
- 1) When a product runs out of stock, update the stock info in ES.
- 2) Update ES when stock comes back as well.
3. Writing the Listing Code
Listing is triggered per spu. Given the spuId, query all its sku data, pick out the attributes that can be searched on plus the other required fields, copy each SkuInfoEntity into a SkuEsModel by setting the fields in a loop, and finally send the List<SkuEsModel> to ES.
- 1. Query all sku info for the spuId, plus the brand name, into a skus list
- 2. Fill in each sku's data
- TODO 1: remote-call the ware service to check stock (collect the skuIds from the skus, return the result as a
Map<Long, Boolean>
); every sku needs this flag, so query once outside the loop
- TODO 2: hot score; add a hotScore field to SkuEsModel, defaulting to 0 for every sku
- TODO 3: query the brand and category names (outside the loop)
- TODO 4: query the spu's searchable spec attributes (outside the loop)
- TODO 5: send the data to the search microservice, which saves it into ES
1. The product microservice
/**
* Put products on the shelf
* @param spuId
*/
@Override
public void up(Long spuId) {
//1. query all sku info for this spuId, plus the brand name
List<SkuInfoEntity> skuInfoEntities = skuInfoService.getSkusBySpuId(spuId);
//TODO 4. query all spec attributes of this spu that can be used for search
List<ProductAttrValueEntity> productAttrValueEntities = productAttrValueService.list(new QueryWrapper<ProductAttrValueEntity>().eq("spu_id", spuId));
List<Long> attrIds = productAttrValueEntities.stream().map(attr -> {
return attr.getAttrId();
}).collect(Collectors.toList());
// keep only the searchable attributes (search_type = 1)
List<Long> searchIds= attrService.selectSearchAttrIds(attrIds);
Set<Long> ids = new HashSet<>(searchIds);
List<SkuEsModel.Attr> searchAttrs = productAttrValueEntities.stream().filter(entity -> {
return ids.contains(entity.getAttrId());
}).map(entity -> {
SkuEsModel.Attr attr = new SkuEsModel.Attr();
BeanUtils.copyProperties(entity, attr);
return attr;
}).collect(Collectors.toList());
//TODO 1. remote call: ask the ware (inventory) service whether each sku has stock
Map<Long, Boolean> stockMap = null;
try {
List<Long> longList = skuInfoEntities.stream().map(SkuInfoEntity::getSkuId).collect(Collectors.toList());
R r = wareFeignService.getSkuHasStock(longList);
// unwrap R into the declared generic type
TypeReference<List<SkuHasStockVo>> typeReference = new TypeReference<List<SkuHasStockVo>>(){};
List<SkuHasStockVo> data = r.getData(typeReference);
stockMap = data.stream().collect(Collectors.toMap(SkuHasStockVo::getSkuId, SkuHasStockVo::getHasStock));
}catch (Exception e){
log.error("Remote call to the ware service failed, cause: {}", e);
}
//2. fill in each sku's data
Map<Long, Boolean> finalStockMap = stockMap;
List<SkuEsModel> skuEsModels = skuInfoEntities.stream().map(sku -> {
SkuEsModel skuEsModel = new SkuEsModel();
BeanUtils.copyProperties(sku, skuEsModel);
skuEsModel.setSkuPrice(sku.getPrice());
skuEsModel.setSkuImg(sku.getSkuDefaultImg());
//TODO 2. hot score, default 0
skuEsModel.setHotScore(0L);
//TODO 3. query the brand and category names
BrandEntity brandEntity = brandService.getById(sku.getBrandId());
skuEsModel.setBrandName(brandEntity.getName());
skuEsModel.setBrandImg(brandEntity.getLogo());
CategoryEntity categoryEntity = categoryService.getById(sku.getCatalogId());
skuEsModel.setCatalogName(categoryEntity.getName());
//set the searchable attributes
skuEsModel.setAttrs(searchAttrs);
//set whether the sku has stock
skuEsModel.setHasStock(finalStockMap==null?false:finalStockMap.get(sku.getSkuId()));
return skuEsModel;
}).collect(Collectors.toList());
//TODO 5. send the data to gulimall-search to save into ES
R r = searchFeignService.saveProductAsIndices(skuEsModels);
if (r.getCode()==0){
// TODO 6. update the spu listing status
baseMapper.upSpuStatus(spuId, ProductConstant.SpuUpStatusEnum.SPU_UP.getCode());
}else {
log.error("Remote ES save of the products failed");
}
// TODO interface idempotency: retry mechanism
}
2. The search microservice
Create the index and mapping in Kibana beforehand, and bind the index name to a constant.
controller
package henu.soft.xiaosi.search.controller;
import henu.soft.common.exception.BizCodeEnume;
import henu.soft.common.to.es.SkuEsModel;
import henu.soft.common.utils.R;
import henu.soft.xiaosi.search.service.ProductSaveService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
@Slf4j
@RestController
@RequestMapping("/search/save")
public class ElasticSearchSaveController {
@Autowired
private ProductSaveService productSaveService;
@PostMapping("/product")
public R saveProductAsIndices(@RequestBody List<SkuEsModel> skuEsModels) {
    // the service returns true when the bulk save had failures;
    // default to true so an exception is also treated as a failure
    boolean hasFailures = true;
    try {
        hasFailures = productSaveService.saveProductAsIndices(skuEsModels);
    } catch (Exception e) {
        log.error("Remote save of the index failed", e);
    }
    if (!hasFailures) {
        return R.ok();
    } else {
        return R.error(BizCodeEnume.PRODUCT_UP_EXCEPTION.getCode(), BizCodeEnume.PRODUCT_UP_EXCEPTION.getMsg());
    }
}
}
package henu.soft.xiaosi.search.service.impl;
import com.alibaba.fastjson.JSON;
import henu.soft.common.to.es.SkuEsModel;
import henu.soft.xiaosi.search.constant.EsConstant;
import henu.soft.xiaosi.search.service.ProductSaveService;
import lombok.extern.slf4j.Slf4j;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
@Slf4j
@Service
public class ProductSaveServiceImpl implements ProductSaveService {
@Autowired
RestHighLevelClient restHighLevelClient;
/**
* Product listing: save the search info into ES
* @param skuEsModels
* @return true if the bulk save had failures
*/
@Override
public boolean saveProductAsIndices(List<SkuEsModel> skuEsModels) throws IOException {
// save into es
// 1. the product index and its mapping were created beforehand in Kibana
// 2. bulk-save the data (parameters: BulkRequest bulkRequest, RequestOptions options)
BulkRequest bulkRequest = new BulkRequest();
for (SkuEsModel skuEsModel : skuEsModels) {
IndexRequest indexRequest = new IndexRequest(EsConstant.PRODUCT_INDEX);
indexRequest.id(skuEsModel.getSkuId().toString());
// serialize the content to JSON
String s = JSON.toJSONString(skuEsModel);
indexRequest.source(s, XContentType.JSON);
bulkRequest.add(indexRequest);
}
BulkResponse response = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
// true if any item of the bulk request failed
boolean hasFailures = response.hasFailures();
List<String> ids = Arrays.stream(response.getItems()).map(item -> {
    return item.getId();
}).collect(Collectors.toList());
// TODO handle partial failures of the bulk save properly
if (hasFailures) {
    log.error("Bulk save had failures: {}", response.buildFailureMessage());
} else {
    log.info("Products listed successfully: {}", ids);
}
return hasFailures;
}
}
4. Feign Source Code
The Feign remote-call flow
- Execution first enters ReflectiveFeign, which checks whether the invoked method is equals() or hashCode(); if not, dispatch routes the call to the real method handler.
- It then enters SynchronousMethodHandler's synchronous invoke, which builds a RequestTemplate from the arguments; the arguments are serialized to JSON, and on the remote side Spring MVC's @RequestBody converts the JSON back into an entity.
SynchronousMethodHandler source
Steps
1. Build the request data and serialize it to JSON
2. Execute the request (on success the response is decoded)
3. Execution has a retry mechanism (disabled by default)
//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//
package feign;
import feign.InvocationHandlerFactory.MethodHandler;
import feign.Logger.Level;
import feign.Request.Options;
import feign.codec.Decoder;
import feign.codec.ErrorDecoder;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;
final class SynchronousMethodHandler implements MethodHandler {
private static final long MAX_RESPONSE_BUFFER_SIZE = 8192L;
private final MethodMetadata metadata;
private final Target<?> target;
private final Client client;
private final Retryer retryer;
private final List<RequestInterceptor> requestInterceptors;
private final Logger logger;
private final Level logLevel;
private final feign.RequestTemplate.Factory buildTemplateFromArgs;
private final Options options;
private final ExceptionPropagationPolicy propagationPolicy;
private final Decoder decoder;
private final AsyncResponseHandler asyncResponseHandler;
private SynchronousMethodHandler(Target<?> target, Client client, Retryer retryer, List<RequestInterceptor> requestInterceptors, Logger logger, Level logLevel, MethodMetadata metadata, feign.RequestTemplate.Factory buildTemplateFromArgs, Options options, Decoder decoder, ErrorDecoder errorDecoder, boolean decode404, boolean closeAfterDecode, ExceptionPropagationPolicy propagationPolicy, boolean forceDecoding) {
this.target = (Target)Util.checkNotNull(target, "target", new Object[0]);
this.client = (Client)Util.checkNotNull(client, "client for %s", new Object[]{target});
this.retryer = (Retryer)Util.checkNotNull(retryer, "retryer for %s", new Object[]{target});
this.requestInterceptors = (List)Util.checkNotNull(requestInterceptors, "requestInterceptors for %s", new Object[]{target});
this.logger = (Logger)Util.checkNotNull(logger, "logger for %s", new Object[]{target});
this.logLevel = (Level)Util.checkNotNull(logLevel, "logLevel for %s", new Object[]{target});
this.metadata = (MethodMetadata)Util.checkNotNull(metadata, "metadata for %s", new Object[]{target});
this.buildTemplateFromArgs = (feign.RequestTemplate.Factory)Util.checkNotNull(buildTemplateFromArgs, "metadata for %s", new Object[]{target});
this.options = (Options)Util.checkNotNull(options, "options for %s", new Object[]{target});
this.propagationPolicy = propagationPolicy;
if (forceDecoding) {
this.decoder = decoder;
this.asyncResponseHandler = null;
} else {
this.decoder = null;
this.asyncResponseHandler = new AsyncResponseHandler(logLevel, logger, decoder, errorDecoder, decode404, closeAfterDecode);
}
}
// Entry point: invoked for each proxied Feign method call
public Object invoke(Object[] argv) throws Throwable {
// 1. argv is packed into a RequestTemplate
RequestTemplate template = this.buildTemplateFromArgs.create(argv);
Options options = this.findOptions(argv);
// 2. Retry mechanism (cloned per invocation)
Retryer retryer = this.retryer.clone();
while(true) {
try {
// 2.1 Execute the request remotely and decode the result
return this.executeAndDecode(template, options);
} catch (RetryableException var9) {
RetryableException e = var9;
try {
// 2.2 On an exception the retryer may retry automatically, but the default configuration never retries
retryer.continueOrPropagate(e);
} catch (RetryableException var8) {
Throwable cause = var8.getCause();
if (this.propagationPolicy == ExceptionPropagationPolicy.UNWRAP && cause != null) {
throw cause;
}
throw var8;
}
if (this.logLevel != Level.NONE) {
this.logger.logRetry(this.metadata.configKey(), this.logLevel);
}
}
}
}
// Execute remotely and decode the result
Object executeAndDecode(RequestTemplate template, Options options) throws Throwable {
Request request = this.targetRequest(template);
// Log the request
if (this.logLevel != Level.NONE) {
this.logger.logRequest(this.metadata.configKey(), this.logLevel, request);
}
long start = System.nanoTime();
Response response;
try {
// The real execution starts here; in Spring Cloud the client is a LoadBalancerFeignClient
response = this.client.execute(request, options);
response = response.toBuilder().request(request).requestTemplate(template).build();
} catch (IOException var13) {
if (this.logLevel != Level.NONE) {
this.logger.logIOException(this.metadata.configKey(), this.logLevel, var13, this.elapsedTime(start));
}
throw FeignException.errorExecuting(request, var13);
}
long elapsedTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
if (this.decoder != null) {
return this.decoder.decode(response, this.metadata.returnType());
} else {
CompletableFuture<Object> resultFuture = new CompletableFuture();
this.asyncResponseHandler.handleResponse(resultFuture, this.metadata.configKey(), response, this.metadata.returnType(), elapsedTime);
try {
if (!resultFuture.isDone()) {
throw new IllegalStateException("Response handling not done");
} else {
return resultFuture.join();
}
} catch (CompletionException var12) {
Throwable cause = var12.getCause();
if (cause != null) {
throw cause;
} else {
throw var12;
}
}
}
}
long elapsedTime(long start) {
return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
}
Request targetRequest(RequestTemplate template) {
Iterator var2 = this.requestInterceptors.iterator();
while(var2.hasNext()) {
RequestInterceptor interceptor = (RequestInterceptor)var2.next();
interceptor.apply(template);
}
return this.target.apply(template);
}
Options findOptions(Object[] argv) {
if (argv != null && argv.length != 0) {
Stream var10000 = Stream.of(argv);
Options.class.getClass();
var10000 = var10000.filter(Options.class::isInstance);
Options.class.getClass();
return (Options)var10000.map(Options.class::cast).findFirst().orElse(this.options);
} else {
return this.options;
}
}
static class Factory {
private final Client client;
private final Retryer retryer;
private final List<RequestInterceptor> requestInterceptors;
private final Logger logger;
private final Level logLevel;
private final boolean decode404;
private final boolean closeAfterDecode;
private final ExceptionPropagationPolicy propagationPolicy;
private final boolean forceDecoding;
Factory(Client client, Retryer retryer, List<RequestInterceptor> requestInterceptors, Logger logger, Level logLevel, boolean decode404, boolean closeAfterDecode, ExceptionPropagationPolicy propagationPolicy, boolean forceDecoding) {
this.client = (Client)Util.checkNotNull(client, "client", new Object[0]);
this.retryer = (Retryer)Util.checkNotNull(retryer, "retryer", new Object[0]);
this.requestInterceptors = (List)Util.checkNotNull(requestInterceptors, "requestInterceptors", new Object[0]);
this.logger = (Logger)Util.checkNotNull(logger, "logger", new Object[0]);
this.logLevel = (Level)Util.checkNotNull(logLevel, "logLevel", new Object[0]);
this.decode404 = decode404;
this.closeAfterDecode = closeAfterDecode;
this.propagationPolicy = propagationPolicy;
this.forceDecoding = forceDecoding;
}
public MethodHandler create(Target<?> target, MethodMetadata md, feign.RequestTemplate.Factory buildTemplateFromArgs, Options options, Decoder decoder, ErrorDecoder errorDecoder) {
return new SynchronousMethodHandler(target, this.client, this.retryer, this.requestInterceptors, this.logger, this.logLevel, md, buildTemplateFromArgs, options, decoder, errorDecoder, this.decode404, this.closeAfterDecode, this.propagationPolicy, this.forceDecoding);
}
}
}
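The `targetRequest` method above shows the interceptor chain: every registered `RequestInterceptor` mutates the `RequestTemplate` in order before `target.apply` builds the final `Request`. The same shape can be sketched with the stdlib alone; the types here are simplified stand-ins, not Feign's real classes.

```java
import java.util.List;
import java.util.function.Consumer;

// Sketch of targetRequest(): interceptors run in order over a mutable
// template, then the target turns it into the final request.
public class InterceptorChainSketch {
    static class Template {
        final StringBuilder headers = new StringBuilder();
        Template header(String h) { headers.append(' ').append(h); return this; }
    }

    static String targetRequest(Template template, List<Consumer<Template>> interceptors) {
        for (Consumer<Template> interceptor : interceptors) {
            interceptor.accept(template);              // interceptor.apply(template)
        }
        return "GET /api" + template.headers;          // target.apply(template)
    }

    public static String demo() {
        List<Consumer<Template>> interceptors = List.of(
            t -> t.header("Authorization: token"),     // e.g. an auth interceptor
            t -> t.header("X-Trace-Id: 42"));          // e.g. a tracing interceptor
        return targetRequest(new Template(), interceptors);
    }
}
```

This is why interceptors are the usual place to propagate auth tokens or trace headers on Feign calls: they see every outgoing template exactly once, just before it becomes a request.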
5. Details of wrapping responses in R
/**
* Copyright (c) 2016-2019 人人开源 All rights reserved.
*
* https://www.renren.io
*
* All rights reserved.
*/
package henu.soft.common.utils;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;
import org.apache.http.HttpStatus;
import java.util.HashMap;
import java.util.Map;
/**
* Response data wrapper
*
* @author Mark sunlightcs@gmail.com
*/
public class R extends HashMap<String, Object> {
private static final long serialVersionUID = 1L;
// Custom helpers so the product-publish flow can carry typed payload data
public R setData(Object data){
put("data",data);
return this;
}
public <T> T getData(TypeReference<T> typeReference){
Object data = get("data");
String s = JSON.toJSONString(data);
T t = JSON.parseObject(s, typeReference);
return t;
}
public R() {
put("code", 0);
put("msg", "success");
}
public static R error() {
return error(HttpStatus.SC_INTERNAL_SERVER_ERROR, "Unknown error, please contact the administrator");
}
public static R error(String msg) {
return error(HttpStatus.SC_INTERNAL_SERVER_ERROR, msg);
}
public static R error(int code, String msg) {
R r = new R();
r.put("code", code);
r.put("msg", msg);
return r;
}
public static R ok(String msg) {
R r = new R();
r.put("msg", msg);
return r;
}
public static R ok(Map<String, Object> map) {
R r = new R();
r.putAll(map);
return r;
}
public static R ok() {
return new R();
}
public R put(String key, Object value) {
super.put(key, value);
return this;
}
public Integer getCode() {
return (Integer) this.get("code");
}
}
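The reason `getData` takes a `TypeReference` and does a JSON round-trip: after Feign decodes the HTTP response into an `R`, `get("data")` holds a list of generic maps, not the caller's VO type. Because of type erasure, an unchecked cast compiles and even "succeeds"; it only fails when an element is first used. This can be reproduced with the stdlib alone (`SkuHasStockVo` here is a minimal stand-in, not the project's real class):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ErasureSketch {
    // Minimal stand-in for the real VO
    public static class SkuHasStockVo {
        private Long skuId;
        public Long getSkuId() { return skuId; }
    }

    @SuppressWarnings("unchecked")
    public static boolean castFailsAtUseSite() {
        Map<String, Object> r = new HashMap<>();
        // What a decoded response actually holds under "data": maps, not VOs
        r.put("data", List.of(Map.of("skuId", 1L, "hasStock", true)));
        List<SkuHasStockVo> wrong = (List<SkuHasStockVo>) r.get("data"); // compiles, no error yet (erasure)
        try {
            wrong.get(0).getSkuId();                                     // ClassCastException here
            return false;
        } catch (ClassCastException expected) {
            return true;
        }
    }
}
```

`getData` sidesteps this by serializing `data` back to JSON and parsing it with the full generic type captured by the `TypeReference`, so the caller gets real VO instances.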
Usage
// TODO 1. Remote call: ask the inventory service whether each SKU has stock
Map<Long, Boolean> stockMap = null;
try {
List<Long> longList = skuInfoEntities.stream().map(SkuInfoEntity::getSkuId).collect(Collectors.toList());
R r = wareFeignService.getSkuHasStock(longList);
// Use a TypeReference so getData returns the expected generic type
TypeReference<List<SkuHasStockVo>> typeReference = new TypeReference<List<SkuHasStockVo>>(){};
List<SkuHasStockVo> data = r.getData(typeReference);
stockMap = data.stream().collect(Collectors.toMap(SkuHasStockVo::getSkuId, SkuHasStockVo::getHasStock));
} catch (Exception e) {
    // Pass the exception itself as the last argument so the stack trace is logged
    log.error("Remote call to the inventory service failed", e);
}