Friendly reminder: a business card with the mentor's contact information, provided officially by the CSDN platform, can be found at the end of this article!
Tech stack:
- Frontend: Vue.js, ECharts, D3.js
- Backend: Flask/Django
- Machine learning / deep learning: LSTM sentiment-analysis model, PyTorch, TensorFlow, fine-tuning of Alibaba's Qwen large model, ChatGPT, convolutional and recurrent neural networks (CNN/RNN)
- Crawler: DrissionPage framework (a newer library with strong anti-anti-crawling capabilities)
- Databases: MySQL (relational), Neo4j (graph), MongoDB
Highlights for the thesis defense:
1. Crawling millions of records
2. Large language model applications
3. Intelligent Q&A
4. Deep-learning model training and optimization: LSTM, PyTorch, TensorFlow, CNN/RNN
5. Neo4j knowledge graph showcase
6. Automatic improvised poetry generation
7. Visualization dashboard
8. Sentiment analysis
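The "automatic improvised poetry" item above boils down to repeatedly sampling the next character from the softmax distribution over the model's output logits. A minimal, self-contained sketch of that sampling step (pure NumPy; the `temperature` knob and the toy logits are illustrative assumptions, not taken from the original project):

```python
import numpy as np

def sample_next_char(logits, temperature=1.0, rng=None):
    """Sample a vocabulary index from a vector of logits.

    Higher temperature -> more 'improvised' output; lower -> more conservative.
    """
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                      # numerical stability before exp
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy 4-character vocabulary: index 0 is strongly preferred.
idx = sample_next_char([2.0, 0.5, 0.1, -1.0], temperature=0.8)
assert 0 <= idx < 4
```

Combined with the dropout the model keeps at generation time, this is what makes each generated poem non-unique.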
The core algorithm code is shared below:
from 古诗生成.wu_poem.test_pome import generate_poetry_auto, train_vec, cang
from 古诗生成.qi_poem.test_pome import generate_poetry_auto as qi_generate_poetry_auto, train_vec as qi_train_vec, cang as qi_cang
import torch
import torch.nn as nn
from gensim.models.word2vec import Word2Vec
import pickle
import os


class Mymodel(nn.Module):
    def __init__(self, embedding_num, hidden_num, word_size):
        super(Mymodel, self).__init__()
        self.embedding_num = embedding_num
        self.hidden_num = hidden_num
        self.word_size = word_size
        # Two stacked LSTM layers; with batch_first=True the output shape is
        # [batch, seq_len, hidden_num], e.g. [5, 31, 64] for hidden_num=64.
        self.lstm = nn.LSTM(input_size=embedding_num, hidden_size=hidden_num,
                            batch_first=True, num_layers=2, bidirectional=False)
        # Dropout guards against overfitting and keeps generated poems non-deterministic.
        self.dropout = nn.Dropout(0.3)
        # Merge batch and sequence dimensions: [batch * seq_len, hidden_num].
        self.flatten = nn.Flatten(0, 1)
        # Linear projection to the vocabulary: [hidden_num, word_size].
        self.linear = nn.Linear(hidden_num, word_size)
        # Cross-entropy loss over the vocabulary.
        self.cross_entropy = nn.CrossEntropyLoss()

    def forward(self, xs_embedding, h_0=None, c_0=None):
        device = "cuda" if torch.cuda.is_available() else "cpu"
        xs_embedding = xs_embedding.to(device)
        if h_0 is None or c_0 is None:
            # Initial states: [num_layers, batch_size, hidden_size]
            h_0 = torch.zeros(2, xs_embedding.shape[0], self.hidden_num, dtype=torch.float32)
            c_0 = torch.zeros(2, xs_embedding.shape[0], self.hidden_num, dtype=torch.float32)
        h_0 = h_0.to(device)
        c_0 = c_0.to(device)
        hidden, (h_0, c_0) = self.lstm(xs_embedding, (h_0, c_0))
        hidden_drop = self.dropout(hidden)
        flatten_hidden = self.flatten(hidden_drop)
        pre = self.linear(flatten_hidden)
        return pre, (h_0, c_0)
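For reference, a quick shape check of the architecture above. The snippet repeats a compact, behaviorally equivalent version of the class so it runs standalone; the batch size 5, sequence length 31, embedding size 128, and vocabulary size 3000 are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Mymodel(nn.Module):  # same architecture as above, repeated so this snippet is standalone
    def __init__(self, embedding_num, hidden_num, word_size):
        super().__init__()
        self.hidden_num = hidden_num
        self.lstm = nn.LSTM(embedding_num, hidden_num, batch_first=True, num_layers=2)
        self.dropout = nn.Dropout(0.3)
        self.flatten = nn.Flatten(0, 1)
        self.linear = nn.Linear(hidden_num, word_size)

    def forward(self, xs_embedding, h_0=None, c_0=None):
        if h_0 is None or c_0 is None:
            h_0 = torch.zeros(2, xs_embedding.shape[0], self.hidden_num)
            c_0 = torch.zeros(2, xs_embedding.shape[0], self.hidden_num)
        hidden, (h_0, c_0) = self.lstm(xs_embedding, (h_0, c_0))
        pre = self.linear(self.flatten(self.dropout(hidden)))
        return pre, (h_0, c_0)

# Batch of 5 sequences of length 31, embedding size 128, toy vocabulary of 3000.
model = Mymodel(embedding_num=128, hidden_num=64, word_size=3000)
xs = torch.randn(5, 31, 128)
pre, (h_n, c_n) = model(xs)
print(tuple(pre.shape))   # (155, 3000): logits flattened to [batch * seq_len, word_size]
print(tuple(h_n.shape))   # (2, 5, 64): [num_layers, batch, hidden_num]
```

The flattened logits line up one-to-one with the flattened target characters, which is what lets the model feed `pre` directly into `nn.CrossEntropyLoss` during training.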
From: https://blog.csdn.net/spark2022/article/details/143088917