COMP9444 Neural Networks and Deep Learning

Term 2, 2024

Assignment - Characters and Hidden Unit Dynamics

Due: Tuesday 2 July, 23:59

Marks: 20% of final assessment

In this assignment, you will be implementing and training neural network models for three different tasks, and analysing the results. You are to submit two Python files, kuzu.py and check.py, as well as a written report hw1.pdf (in pdf format).

Provided Files

Copy the archive hw1.zip into your own filespace and unzip it. This should create a directory hw1, subdirectories net and plot, and eight Python files: kuzu.py, check.py, kuzu_main.py, check_main.py, seq_train.py, seq_models.py, seq_plot.py and anb2n.py.

Your task is to complete the skeleton files kuzu.py and check.py and submit them, along with your report.

Part 1: Japanese Character Recognition

For Part 1 of the assignment you will be implementing networks to recognize handwritten Hiragana symbols. The dataset to be used is Kuzushiji-MNIST, or KMNIST for short. The paper describing the dataset is available here. It is worth reading, but in short: significant changes occurred to the language when Japan reformed their education system in 1868, and the majority of Japanese today cannot read texts published over 150 years ago. This paper presents a dataset of handwritten, labeled examples of this old-style script (Kuzushiji). Along with this dataset, however, they also provide a much simpler one, containing 10 Hiragana characters with 7000 samples per class. This is the dataset we will be using.

[Figure: Text from 1772 (left) compared to 1900, showing the standardization of written Japanese.]

  1. [1 mark] Implement a model NetLin which computes a linear function of the pixels in the image, followed by log softmax. Run the code by typing:

python3 kuzu_main.py --net lin

Copy the final accuracy and confusion matrix into your report. The final accuracy should be around 70%. Note that the rows of the confusion matrix indicate the target character, while the columns indicate the one chosen by the network. (0="o", 1="ki", 2="su", 3="tsu", 4="na", 5="ha", 6="ma", 7="ya", 8="re", 9="wo"). More examples of each character can be found here.
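The skeleton leaves the model definition to you; a minimal sketch of what a NetLin-style module could look like is below. The class name, the 28x28 single-channel input and the 10 output classes come from the task description; everything else (flattening strategy, bias usage) is an illustrative assumption, and kuzu_main.py is expected to handle training.

```python
# A sketch of NetLin: one linear map from pixels to classes, then log softmax.
# Assumes 28x28 grayscale KMNIST images and 10 output classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetLin(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 10)   # one weight per pixel per class, plus biases

    def forward(self, x):
        x = x.view(x.shape[0], -1)         # flatten [N, 1, 28, 28] -> [N, 784]
        return F.log_softmax(self.fc(x), dim=1)

model = NetLin()
out = model(torch.zeros(2, 1, 28, 28))
print(out.shape)  # torch.Size([2, 10])
```

The log softmax output pairs with a negative log likelihood loss during training; each row of `out` is a vector of log probabilities over the 10 classes.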
  2. [1 mark] Implement a fully connected 2-layer network NetFull (i.e. one hidden layer, plus the output layer), using tanh at the hidden nodes and log softmax at the output node. Run the code by typing:

python3 kuzu_main.py --net full

Try different values (multiples of 10) for the number of hidden nodes and try to determine a value that achieves high accuracy (at least 84%) on the test set. Copy the final accuracy and confusion matrix into your report, and include a calculation of the total number of independent parameters in the network.
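A sketch of a NetFull-style module, with hid=100 as a purely illustrative hidden-layer size (the spec asks you to search for a good value yourself). The parameter-count arithmetic in the comment is the kind of calculation the report asks for.

```python
# A sketch of NetFull: one tanh hidden layer, then log softmax output.
# hid=100 is an illustrative choice, not a recommended answer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetFull(nn.Module):
    def __init__(self, hid=100):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, hid)
        self.fc2 = nn.Linear(hid, 10)

    def forward(self, x):
        h = torch.tanh(self.fc1(x.view(x.shape[0], -1)))
        return F.log_softmax(self.fc2(h), dim=1)

model = NetFull(hid=100)
# independent parameters: (784*100 weights + 100 biases) + (100*10 weights + 10 biases)
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # 79510
```

With 100 hidden nodes the count is 784*100 + 100 + 100*10 + 10 = 79510; substituting your chosen hidden size gives the figure for your report.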
  3. [2 marks] Implement a convolutional network called NetConv, with two convolutional layers plus one fully connected layer, all using relu activation function, followed by the output layer, using log softmax. You are free to choose for yourself the number and size of the filters, metaparameter values (learning rate and momentum), and whether to use max pooling or a fully convolutional architecture. Run the code by typing:

python3 kuzu_main.py --net conv

Your network should consistently achieve at least 93% accuracy on the test set after 10 training epochs. Copy the final accuracy and confusion matrix into your report, and include a calculation of the total number of independent parameters in the network.
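Since the filter counts, kernel sizes and pooling choices are left to you, the following is only one plausible shape for a NetConv-style module (5x5 kernels, 16 and 32 filters, max pooling, a 128-unit fully connected layer); treat every number here as an assumption to tune, not a reference answer.

```python
# One possible NetConv shape: two conv+relu+maxpool stages, one fully
# connected relu layer, then log softmax. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=2)   # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)  # 14x14 -> 14x14
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # -> [N, 16, 14, 14]
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # -> [N, 32, 7, 7]
        x = F.relu(self.fc1(x.view(x.shape[0], -1)))
        return F.log_softmax(self.fc2(x), dim=1)

out = NetConv()(torch.zeros(2, 1, 28, 28))
print(out.shape)  # torch.Size([2, 10])
```

Padding of 2 keeps each 5x5 convolution size-preserving, so only the two pooling steps shrink the feature maps (28 to 14 to 7), which makes the flattened size easy to compute for the parameter count.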

  4. [4 marks] Briefly discuss the following points:

     (a) the relative accuracy of the three models,
     (b) the number of independent parameters in each of the three models,
     (c) the confusion matrix for each model: which characters are most likely to be mistaken for which other characters, and why?
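The parameter counts should be calculated by hand for the report, but PyTorch can enumerate them directly, which makes a handy cross-check. The snippet below uses a bare linear layer with the same shape as the NetLin case as a stand-in; the same two lines work on any of the three models.

```python
# Cross-check a hand-computed parameter count by summing tensor sizes.
import torch.nn as nn

model = nn.Linear(28 * 28, 10)  # stand-in with the same shape as NetLin's linear layer
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(total)  # 784*10 weights + 10 biases = 7850
```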

Part 2: Multi-Layer Perceptron

In Part 2 you will be exploring 2-layer neural networks (either trained, or designed by hand) to classify the following data:

[Figure: the Part 2 training data]

  1. [1 mark] Train a 2-layer neural network with either 5 or 6 hidden nodes, using sigmoid activation at both the hidden and output layer, on the above data, by typing:

python3 check_main.py --act sig --hid 6

You may need to run the code a few times, until it achieves accuracy of 100%. If the network appears to be stuck in a local minimum, you can terminate the process with ⟨ctrl⟩-c and start again. You are free to adjust the learning rate and the number of hidden nodes, if you wish (see code for details). The code should produce images in the plot subdirectory graphing the function computed by each hidden node (hid_6_?.jpg) and the network as a whole (out_6.jpg). Copy these images into your report.
  2. [2 marks] Design by hand a 2-layer neural network with 4 hidden nodes, using the Heaviside (step) activation function at both the hidden and output layer, which correctly classifies the above data. Include a diagram of the network in your report, clearly showing the value of all the weights and biases. Write the equations for the dividing line determined by each hidden node. Create a table showing the activations of all the hidden nodes and the output node, for each of the 9 training items, and include it in your report. You can check that your weights are correct by entering them in the part of check.py where it says "Enter Weights Here", and typing:

python3 check_main.py --act step --hid 4 --set_weights
  3. [1 mark] Now rescale your hand-crafted weights and biases from Part 2 by multiplying all of them by a large (fixed) number (for example, 10) so that the combination of rescaling followed by sigmoid will mimic the effect of the step function. With these rescaled weights and biases, the data should be correctly classified by the sigmoid network as well as the step function network. Verify that this is true by typing:

python3 check_main.py --act sig --hid 4 --set_weights

Once again, the code should produce images in the plot subdirectory showing the function computed by each hidden node (hid_4_?.jpg) and the network as a whole (out_4.jpg). Copy these images into your report, and be ready to submit check.py with the (rescaled) weights as part of your assignment submission.

Part 3: Hidden Unit Dynamics for Recurrent Networks

In Part 3 you will be investigating the hidden unit dynamics of recurrent networks trained on language prediction tasks, using the supplied code seq_train.py and seq_plot.py.

  1. [2 marks] Train a Simple Recurrent Network (SRN) on the Reber Grammar prediction task by typing:

python3 seq_train.py --lang reber

This SRN has 7 inputs, 2 hidden units and 7 outputs. The trained networks are stored every 10000 epochs, in the net subdirectory. After the training finishes, plot the hidden unit activations at epoch 50000 by typing:

python3 seq_plot.py --lang reber --epoch 50

The dots should be arranged in discernible clusters by color. If they are not, run the code again until the training is successful. The hidden unit activations are printed according to their "state", using the colormap "jet". Based on this colormap, annotate your figure (either electronically, or with a pen on a printout) by drawing a circle around the cluster of points corresponding to each state in the state machine, and drawing arrows between the states, with each arrow labeled with its corresponding symbol. Include the annotated figure in your report.
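The 2D points that seq_plot.py graphs are simply the two hidden unit activations recorded after each input symbol. The actual model lives in the supplied seq_models.py, so the sketch below is only an illustration of the idea, using PyTorch's built-in tanh RNN with the same 7-input, 2-hidden-unit shape:

```python
# Collecting per-timestep hidden activations from a 2-unit recurrent layer,
# the raw material for a hidden-unit-dynamics plot. Illustrative only; the
# assignment's model is defined in seq_models.py.
import torch
import torch.nn as nn

srn = nn.RNN(input_size=7, hidden_size=2, batch_first=True)  # 7 symbols, 2 hidden units
seq = torch.zeros(1, 5, 7)                                   # one sequence of 5 one-hot symbols
seq[0, torch.arange(5), torch.tensor([0, 1, 2, 3, 4])] = 1.0
outputs, _ = srn(seq)               # outputs[0, t] is the hidden state after symbol t
points = outputs[0].detach()        # 5 points in 2D hidden unit space, one per time step
print(points.shape)  # torch.Size([5, 2])
```

Because the hidden activation is tanh, every point lands in the square [-1, 1] x [-1, 1], which is why the clusters for each grammar state are visually separable in the plot.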

  2. [1 mark] Train an SRN on the a^n b^n language prediction task by typing:

python3 seq_train.py --lang anbn

The a^n b^n language is a concatenation of a random number of A's followed by an equal number of B's. The SRN has 2 inputs, 2 hidden units and 2 outputs.

Look at the predicted probabilities of A and B as the training progresses. The first B in each sequence and all A's after the first A are not deterministic and can only be predicted in a probabilistic sense. But, if the training is successful, all other symbols should be correctly predicted. In particular, the network should predict the last B in each sequence as well as the subsequent A. The error should be consistently in the range of 0.01 to 0.03. If the network appears to have learned the task successfully, you can stop it at any time using ⟨ctrl⟩-c. If it appears to be stuck in a local minimum, you can stop it and run the code again until it is successful.

After the training finishes, plot the hidden unit activations by typing:

python3 seq_plot.py --lang anbn --epoch 100

Include the resulting figure in your report. The states are again printed according to the colormap "jet". Note, however, that these "states" are not unique but are instead used to count either the number of A's we have seen or the number of B's we are still expecting to see.

Briefly explain how the a^n b^n prediction task is achieved by the network, based on the generated figure. Specifically, you should describe how the hidden unit activations change as the string is processed, and how the network is able to correctly predict the last B in each sequence as well as the following A.
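To make the language concrete, here is a tiny generator for a^n b^n strings. The supplied training code builds its own sequences, so this is purely illustrative.

```python
# Generate one a^n b^n string: n A's followed by exactly n B's,
# with n drawn at random. Illustrative; seq_train.py has its own generator.
import random

def anbn(max_n=5, seed=0):
    rng = random.Random(seed)
    n = rng.randint(1, max_n)
    return "A" * n + "B" * n

s = anbn()
print(s)
```

After the first A, the only deterministic predictions are the B's after the first B, and the A that follows the final B; everything the network gets right beyond chance on those positions is evidence it is counting.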
  3. [2 marks] Train an SRN on the a^n b^n c^n language prediction task by typing:

python3 seq_train.py --lang anbncn

The SRN now has 3 inputs, 3 hidden units and 3 outputs. Again, the "state" is used to count up the A's and count down the B's and C's. Continue training (and re-start, if necessary) for 200k epochs, or until the network is able to reliably predict all the C's as well as the subsequent A, and the error is consistently in the range of 0.01 to 0.03.

After the training finishes, plot the hidden unit activations at epoch 200000 by typing:

python3 seq_plot.py --lang anbncn --epoch 200

(you can choose a different epoch number, if you wish). This should produce three images labeled anbncn_srn3_??.jpg, and also display an interactive 3D figure. Try to rotate the figure in 3 dimensions to get one or more good view(s) of the points in hidden unit space, save them, and include them in your report. (If you can't get the 3D figure to work on your machine, you can use the images anbncn_srn3_??.jpg.)

Briefly explain how the a^n b^n c^n prediction task is achieved by the network, based on the generated figure. Specifically, you should describe how the hidden unit activations change as the string is processed, and how the network is able to correctly predict the last B in each sequence as well as all of the C's and the following A.
  4. [3 marks] This question is intended to be more challenging. Train an LSTM network to predict the Embedded Reber Grammar, by typing:

python3 seq_train.py --lang reber --embed True --model lstm --hid 4

You can adjust the number of hidden nodes if you wish. Once the training is successful, try to analyse the behavior of the LSTM and explain how the task is accomplished (this might involve modifying the code so that it returns and prints out the context units as well as the hidden units).
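Inspecting "the context units as well as the hidden units" amounts to reading out both tensors of the LSTM state. The real model is defined in seq_models.py, so the sketch below is illustrative; the sizes just mirror the command line above (7 Reber symbols, --hid 4).

```python
# Reading out both the hidden state h and the context (cell) state c of an
# LSTM after processing a sequence. Illustrative; the assignment's model is
# defined in seq_models.py.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=7, hidden_size=4, batch_first=True)
seq = torch.zeros(1, 6, 7)                 # one sequence of 6 one-hot symbols
seq[0, torch.arange(6), torch.tensor([0, 2, 3, 3, 4, 6])] = 1.0
out, (h, c) = lstm(seq)                    # out holds h at every step; (h, c) is the final state
print(h.shape, c.shape)  # torch.Size([1, 1, 4]) torch.Size([1, 1, 4])
```

For the embedded grammar, the interesting question is which context unit remembers the initial branch (T vs P) across the long inner Reber string; printing c after each symbol, not just at the end, is one way to see it.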

Submission

You should submit by typing:

give cs9444 hw1 kuzu.py check.py hw1.pdf

You can submit as many times as you like; later submissions will overwrite earlier ones. You can check that your submission has been received by using the following command:

9444 classrun -check hw1

The submission deadline is Tuesday 2 July, 23:59. In accordance with UNSW-wide policies, a 5% penalty will be applied for every 24 hours late after the deadline, up to a maximum of 5 days, after which submissions will not be accepted. Additional information may be found in the FAQ and will be considered as part of the specification for the project. You should check this page regularly.

Plagiarism Policy

Group submissions will not be allowed for this assignment. Your code and report must be entirely your own work. Plagiarism detection software will be used to compare all submissions pairwise (including submissions for similar assignments from previous offerings, if appropriate), and serious penalties will be applied, particularly in the case of repeat offences.

DO NOT COPY FROM OTHERS; DO NOT ALLOW ANYONE TO SEE YOUR CODE

Please refer to the UNSW Policy on Academic Integrity and Plagiarism if you require further clarification on this matter.

Good luck!

From: https://www.cnblogs.com/qq99515681/p/18279169
