
Text Classification with a CNN: Replacing the 2-D Image Convolution with a 1-D Convolution


Text classification with a CNN (the training script):

    from __future__ import division, print_function, absolute_import
    import tensorflow as tf
    import tflearn
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.conv import conv_1d, global_max_pool
    from tflearn.layers.merge_ops import merge
    from tflearn.layers.estimator import regression
    from tflearn.data_utils import to_categorical, pad_sequences
    from tflearn.datasets import imdb
    import pickle
    import numpy as np
    """
    还是加载imdb.pkl数据
    """
    train, test, _ = imdb.load_data(path='imdb.pkl', n_words=10000,
                                    valid_portion=0.1)
    trainX, trainY = train
    testX, testY = test
    """
    转化为固定长度的向量,这里固定长度为100
    """
    trainX = pad_sequences(trainX, maxlen=100, value=0.)
    testX = pad_sequences(testX, maxlen=100, value=0.)
    """
    二值化向量
    """
    trainY = to_categorical(trainY, nb_classes=2)
    testY = to_categorical(testY, nb_classes=2)
    """
    构建卷积神经网络,这里卷积神经网网络为1d卷积
    """
    network = input_data(shape=[None, 100], name='input')
    network = tflearn.embedding(network, input_dim=10000, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    """
    训练开始
    """
    model = tflearn.DNN(network, tensorboard_verbose=0)
    model.fit(trainX, trainY, n_epoch=1, shuffle=True, validation_set=(testX, testY), show_metric=True, batch_size=32)
    """
    模型保存
    """
    model.save("cnn.model")
    """
    做测试使用
    """
    test=np.linspace(1,101,100).reshape(1,100)
    print("测试结果:",model.predict(test))

Training output and model saving:

    Training Step: 697  | total loss: 0.40838 | time: 79.960s
    | Adam | epoch: 001 | loss: 0.40838 - acc: 0.8247 -- iter: 22304/22500
    Training Step: 698  | total loss: 0.39128 | time: 80.112s
    | Adam | epoch: 001 | loss: 0.39128 - acc: 0.8329 -- iter: 22336/22500
    Training Step: 699  | total loss: 0.38896 | time: 80.298s
    | Adam | epoch: 001 | loss: 0.38896 - acc: 0.8402 -- iter: 22368/22500
    Training Step: 700  | total loss: 0.39468 | time: 80.456s
    | Adam | epoch: 001 | loss: 0.39468 - acc: 0.8343 -- iter: 22400/22500
    Training Step: 701  | total loss: 0.39380 | time: 80.640s
    | Adam | epoch: 001 | loss: 0.39380 - acc: 0.8353 -- iter: 22432/22500
    Training Step: 702  | total loss: 0.38980 | time: 80.787s
    | Adam | epoch: 001 | loss: 0.38980 - acc: 0.8392 -- iter: 22464/22500
    Training Step: 703  | total loss: 0.39020 | time: 80.970s
    | Adam | epoch: 001 | loss: 0.39020 - acc: 0.8397 -- iter: 22496/22500
    Training Step: 704  | total loss: 0.38543 | time: 82.891s
    | Adam | epoch: 001 | loss: 0.38543 - acc: 0.8370 | val_loss: 0.44625 - val_acc: 0.7880 -- iter: 22500/22500
    --
    Test result: [[ 0.77064246  0.2293576 ]]
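
The two numbers printed by model.predict are the softmax probabilities for the two classes (the IMDB task is binary sentiment, but the script never says which column is which, so any label names are an assumption). A small helper for turning that output into a label could look like this:

    import numpy as np

    # Hypothetical label names: the column-to-label mapping is an assumption,
    # since the script above never makes it explicit.
    LABELS = ["class_0", "class_1"]

    def decode_prediction(probs):
        """Map a (1, 2) softmax output from model.predict to (label, confidence)."""
        probs = np.asarray(probs)
        idx = int(np.argmax(probs, axis=1)[0])
        return LABELS[idx], float(probs[0, idx])

    # decode_prediction([[0.77064246, 0.2293576]]) -> ('class_0', 0.77064246...)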




Loading the model and making a prediction:

    import tensorflow as tf
    import numpy as np
    import tflearn
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.conv import conv_1d, global_max_pool
    from tflearn.layers.merge_ops import merge
    from tflearn.layers.estimator import regression
    """
    跟训练模型的网络结构一样
    """
    network = input_data(shape=[None, 100], name='input')
    network = tflearn.embedding(network, input_dim=10000, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    """
    加载模型做预测
    """
    model = tflearn.DNN(network)
    model.load("cnn.model")
    test = np.linspace(1, 101, 100).reshape(1, 100)
    # Predict  [[ 0.7725634   0.22743654]]
    prediction = model.predict(test)
    print("Model prediction:", prediction)



Output:

    2017-10-15 19:35:14.940689: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    Model prediction: [[ 0.77064246  0.2293576 ]]
    Process finished with exit code 0



That basically covers how to do text classification with TFLearn's high-level API.

 

From: https://blog.51cto.com/u_11908275/6386900

    (1).vim 安装vimyuminstallvim命令命令模式--vim文件名字或者编辑模式按esc进入i--在光标的前面插入字符a--在光标的后面添加入字符o--在光标下一行插入字符编辑模式--命令行模式按i进入yy--复制当前行p--粘贴dd--删除当前行......