
Word Segmentation

Posted: 2023-12-28 22:34:31

The script below segments the full text of 红楼梦 (Dream of the Red Chamber) with jieba, counts word frequencies, filters out 60 common stopwords, and prints the 20 most frequent remaining words.

import jieba

# Read the full text and segment it into a list of words
txt = open("红楼梦.txt", "r", encoding="UTF-8").read()
words = jieba.lcut(txt)

# Count word frequencies, skipping single-character tokens
count = {}
for word in words:
    if len(word) == 1:
        continue
    count[word] = count.get(word, 0) + 1

# 60 common stopwords (pronouns, function words, and a few names)
# to exclude from the ranking
cut = ['什么', '一个', '我们', '那里', '你们', '如今', '说道', '知道', '起来', '姑娘', '这里', '出来',
       '他们', '众人', '自己', '一面', '只见', '两个', '没有', '怎么', '不是', '不知', '这个', '听见',
       '这样', '进来', '东西', '告诉', '就是', '咱们', '回来', '大家', '只是', '只得', '这些', '不敢',
       '丫头', '出去', '所以', '不过', '的话', '不好', '鸳鸯', '探春', '一时', '不能', '过来', '心里',
       '银子', '如此', '今日', '几个', '二人', '答应', '还有', '只管', '说话', '这么', '一回', '那边']
for w in cut:
    count.pop(w, None)  # pop() with a default avoids a KeyError if a stopword never appeared

# Sort by frequency, descending, and print the top 20
paixu = list(count.items())
paixu.sort(key=lambda item: item[1], reverse=True)
for i in range(20):
    word, counts = paixu[i]
    print("{0:<5} {1:>5}".format(word, counts))
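The same counting and filtering can be written more compactly with collections.Counter. A minimal sketch, assuming the same 红楼梦.txt file; STOPWORDS here is a hypothetical stand-in for the full 60-word cut list above:

import jieba
from collections import Counter

# Hypothetical stand-in for the full stopword list (cut) above
STOPWORDS = {'什么', '一个', '我们', '那里', '你们', '如今'}

with open("红楼梦.txt", "r", encoding="UTF-8") as f:
    words = jieba.lcut(f.read())

# Keep multi-character tokens that are not stopwords, then count them
counts = Counter(w for w in words if len(w) > 1 and w not in STOPWORDS)

# most_common(20) returns the 20 highest-frequency (word, count) pairs
for word, n in counts.most_common(20):
    print("{0:<5} {1:>5}".format(word, n))

Counter handles both accumulation and ranking, so no manual sort or per-word deletion over the stopword list is needed.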

Tags: count, cut, word, paixu, words, txt, word segmentation
From: https://www.cnblogs.com/wubianxuyu/p/17933728.html
