jieba Word Segmentation: 红楼梦 (Dream of the Red Chamber)

Posted: 2023-12-28 22:22:49

The script segments the full text of 红楼梦 with jieba, drops single-character tokens and a hand-picked stopword set, then prints the 20 most frequent remaining words:

import jieba

# Tokens to exclude from the frequency count: function words plus a few
# high-frequency names and titles the tally should skip
excludes = {"什么","一个","我们","那里","你们","如今","说道","知道","起来","姑娘","这里","出来","他们","众人","自己",
            "一面","只见","怎么","两个","没有","不是","不知","这个","听见","这样","进来","咱们","告诉","就是",
            "东西","袭人","回来","只是","大家","只得","老爷","丫头","这些","不敢","出去","所以","不过","的话","不好",
            "姐姐","探春","鸳鸯","一时","不能","过来","心里","如此","今日","银子","几个","答应","二人","还有","只管",
            "这么","说话","一回","那边","这话","外头","打发","自然","今儿","罢了","屋里","那些","听说","小丫头","不用","如何"}

# Read the whole novel; the with-block closes the file automatically
with open("红楼梦.txt", "r", encoding="utf-8") as f:
    txt = f.read()

words = jieba.lcut(txt)  # precise-mode segmentation into a list of tokens

counts = {}
for word in words:
    if len(word) == 1:  # single-character tokens are mostly particles; skip them
        continue
    counts[word] = counts.get(word, 0) + 1

for word in excludes:
    counts.pop(word, None)  # a default avoids KeyError when a word never occurred

items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)  # sort by frequency, descending

# Print the top 20: word left-aligned in 10 columns, count right-aligned in 5
for i in range(20):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))

 

Tags: jieba, word, items, 红楼梦, counts, txt, word segmentation
From: https://www.cnblogs.com/fmhqq/p/17933714.html
