Chinese Word Segmentation in Python
jieba ("结巴") Chinese word segmentation
https://github.com/fxsjy/jieba
Installation

pip install jieba
pip install paddlepaddle  # optional, only needed for paddle mode
20.5.1. Segmentation Demo

# encoding=utf-8
import jieba
import paddle

paddle.enable_static()
jieba.enable_paddle()  # enable paddle mode

strs = ["我来到北京清华大学", "乒乓球拍卖完了", "中国科学技术大学"]
for s in strs:  # avoid shadowing the builtin `str`
    seg_list = jieba.cut(s, use_paddle=True)  # paddle mode
    print("Paddle Mode: " + '/'.join(list(seg_list)))

seg_list = jieba.cut("我来到北京清华大学", cut_all=True)
print("Full Mode: " + "/ ".join(seg_list))  # full mode

seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
print("Default Mode: " + "/ ".join(seg_list))  # precise mode

seg_list = jieba.cut("他来到了网易杭研大厦")  # precise mode is the default
print(", ".join(seg_list))

seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所,后在日本京都大学深造")  # search-engine mode
print(", ".join(seg_list))
20.5.2. Logging Configuration

import logging

import jieba

# Route jieba's log output through our own logger to suppress
# the unwanted startup messages (e.g. "Building prefix dict...").
logger = logging.getLogger()
jieba.default_logger = logger

text = "他来到了网易杭研大厦"
words = jieba.cut(text)
print(", ".join(words))
print("-" * 50)

# Tell jieba that "杭研大厦" and "他来到了" should each be kept as one word.
jieba.suggest_freq('杭研大厦', True)
jieba.suggest_freq('他来到了', True)
words = jieba.cut(text)
print(", ".join(words))

From: https://blog.csdn.net/u010604770/article/details/141969957