COMP6685 Deep Learning

RETRIEVAL ASSESSMENT

INDIVIDUAL (100% of total mark)

Deliverables: 1x Jupyter notebook

Task: You are required to develop Python code using TensorFlow (Keras), with additional comments, to answer the question in the next section. Your code should be able to run on CPUs.

Create your code, in the template provided in Moodle, to train a Recurrent Neural Network (RNN) on the public benchmark dataset named Poker Hand, available at https://archive.ics.uci.edu/ml/datasets/Poker+Hand .

The Poker Hand dataset is composed of one training set named “poker-hand-training-true.data” and one testing set named “poker-hand-testing.data”.

You will need to download both the training and testing sets to your local disk by clicking the Download hyperlink (the button at the top right of the page).

In the Poker Hand dataset, each data sample (row) is an example of a hand consisting of five playing cards drawn from a standard deck of 52. Each card is described using two attributes (suit and rank), for a total of 10 predictive attributes. There is one Class attribute that describes the "Poker Hand". You can find more information about this dataset at:

https://www.kaggle.com/datasets/rasvob/uci-poker-hand-dataset

The dataset should be imported in the code. An example of how to import the dataset into your code can be found at the link below:

https://www.kaggle.com/code/rasvob/uci-poker-dataset-classification
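As a rough illustration only (not the template code), the two files could be loaded with pandas along the lines of the sketch below. The column names and the reshaping of each hand into 5 time steps of 2 features are assumptions about one reasonable way to feed the data to an RNN, not a required format.

```python
import pandas as pd

# Assumed column names, following the UCI attribute description:
# five cards, each with a suit (S) and a rank (C), plus the hand class.
cols = ["S1", "C1", "S2", "C2", "S3", "C3", "S4", "C4", "S5", "C5", "CLASS"]

# File names as given on the UCI page; adjust the paths to where you saved them.
train_df = pd.read_csv("poker-hand-training-true.data", header=None, names=cols)
test_df = pd.read_csv("poker-hand-testing.data", header=None, names=cols)

# Separate the 10 predictive attributes from the class label.
X_train, y_train = train_df[cols[:-1]].values, train_df["CLASS"].values
X_test, y_test = test_df[cols[:-1]].values, test_df["CLASS"].values

# One possible sequence view for an RNN: 5 time steps (cards), each described
# by 2 features (suit, rank). Scaling or one-hot encoding of suit/rank is
# optional and left to your own design.
X_train = X_train.reshape(-1, 5, 2).astype("float32")
X_test = X_test.reshape(-1, 5, 2).astype("float32")
```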

In this assignment, you are required to implement a single vanilla RNN (not an LSTM nor a GRU) and add a comment for each of the parameters chosen. The RNN should be trained on the training set and its performance should be evaluated on the testing set.

You can determine the settings of the RNN (including the number of layers, the number of recurrent neurons in each layer, regularization, dropout, optimiser, activation function, learning rate, etc.) according to your own preference. However, it is important that the RNN achieves good classification performance in terms of accuracy on the testing set after being trained on the training set for no more than 40 epochs.
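For orientation only, a minimal vanilla RNN configuration in Keras might look like the sketch below. Every hyperparameter shown (number of units, dropout rate, optimiser, learning rate, batch size, validation split) is an illustrative assumption rather than a prescribed setting; you should choose and comment your own values in the same way.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(5, 2)),               # 5 time steps (cards), 2 features (suit, rank)
    layers.SimpleRNN(64, activation="tanh"),  # single vanilla recurrent layer; 64 units is arbitrary
    layers.Dropout(0.2),                      # light regularisation; 0.2 is an example rate
    layers.Dense(10, activation="softmax"),   # 10 poker-hand classes (0-9)
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # example learning rate
    loss="sparse_categorical_crossentropy",               # integer class labels
    metrics=["accuracy"],
)

# At most 40 epochs, as required by the brief; a small validation split taken
# from the training data is used here only to monitor overfitting.
history = model.fit(X_train, y_train, epochs=40, batch_size=64, validation_split=0.1)
```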

An acceptable classification accuracy rate on the testing set should be above 65%, namely, more than 65% of the testing data samples are correctly classified by the RNN model. You are also required to present the confusion matrix along with the classification accuracy as the final prediction result.
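Assuming scikit-learn is available alongside the model and test arrays from the earlier sketches, one common way to report both the accuracy and the confusion matrix is:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Predicted class = argmax over the softmax outputs.
y_pred = np.argmax(model.predict(X_test), axis=1)

# Overall classification accuracy on the testing set (target: above 0.65).
print(f"Test accuracy: {accuracy_score(y_test, y_pred):.4f}")

# 10x10 confusion matrix: rows are true classes, columns are predicted classes.
print(confusion_matrix(y_test, y_pred, labels=list(range(10))))
```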

All main settings should be commented inline in the code. The output of each code block and the training progress of the RNN model should be kept in the submitted Jupyter notebook file. A question asking for final remarks on the results should be answered in the markdown cell defined in the template.

Submission:

•    by Moodle within the deadline of Monday, 5th August 2024, before the cutoff at 23.55

•   Submit only a Jupyter notebook file. Use the template provided. Comments should be included in the file either as comments in the code or in the markdown space allocated.

•   Your Jupyter notebook file name should include your Student ID and Name.

Marking Scheme (100 marks for the assessment that corresponds to 25% of the total mark of the module):

•    Importing the dataset (both training set and testing set). (10 marks)

•   Correct definition and implementation of the RNN (20 marks)

•   Training of the RNN on the training set (10 marks)

•    Evaluate the model on the testing set (10 marks)

•   Acceptable classification accuracy on the testing set with confusion matrix presented (20 marks)

•   Code outline, including useful comments in the code (10 marks)

•   Code running without errors (10 marks)

•    Final remarks/conclusions on the obtained results and ideas for further improvement of the accuracy (10 marks)

 
