
The jieba library

Posted: 2023-12-28 23:11:21  Views: 30
Tags: jieba, word, items, word frequency, file, counts

```
import jieba

# Read the text file
path = "红楼梦.txt"
with open(path, "r", encoding="utf-8") as file:
    text = file.read()

# Segment the text with jieba (precise mode)
words = jieba.lcut(text)

# Count word frequencies
counts = {}
for word in words:
    # Skip single-character tokens
    if len(word) == 1:
        continue
    # Update this word's count in the dictionary
    counts[word] = counts.get(word, 0) + 1

# Sort the (word, count) pairs by frequency, descending
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)

# Print the 20 most frequent words
for i in range(20):
    word, count = items[i]
    print(f"{word:<10}{count:>5}")
```
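The counting and sorting steps above can also be written more concisely with `collections.Counter` from the standard library. A minimal sketch, using a small hypothetical word list standing in for the output of `jieba.lcut` (so it runs without the novel's text file):

```python
from collections import Counter

# Hypothetical pre-segmented tokens, as jieba.lcut would return them
words = ["宝玉", "笑道", "了", "宝玉", "黛玉", "笑道", "宝玉"]

# Filter out single-character tokens and count in one pass
counts = Counter(w for w in words if len(w) > 1)

# most_common(n) returns the n highest-frequency pairs, sorted descending
for word, count in counts.most_common(3):
    print(f"{word:<10}{count:>5}")
```

`Counter.most_common` replaces the manual `counts.get(word, 0) + 1` loop and the explicit `items.sort(...)` call, while producing the same ordering.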

From: https://www.cnblogs.com/xizhao-xizhao/p/17933796.html
