Assignment 2
2802ICT Intelligent System
School of ICT, Griffith University
Trimester 1, 2023
Instructions:
• Due: Monday 29th May 2023, 11:59 PM with demonstrations to be held on Week 12.
• Marks: 50% of your overall grade
• Late submission: Late submission is allowed but a penalty applies. The penalty is defined as the
reduction of the mark allocated to the assessment item by 5% of the total weighted mark for the
assessment item, for each working day that the item is late. A working day is defined as Monday to
Friday. Assessment items submitted more than five working days after the due date will be
awarded zero marks.
• Extensions: You can request an extension of time on one of two grounds: medical or special
circumstances (e.g. special family or personal circumstances, unavoidable commitments).
All requests must be made through the myGriffith portal.
• Individual Work: You must complete this assignment individually and submit authentic work.
This should be your own work. Anyone caught plagiarising will be penalised and reported to the
university.
• Presentation: You must present/demonstrate your work. Your work will not be marked if you do
not present it.
Maximum marks are:
- 25 for Task 1
- 40 for Task 2
- 20 for the report
- 15 for the interview (demonstration + open questions)
Total: 100 marks
Objectives:
✓ Implement learning algorithms
✓ Evaluate the performance of a simple learning system on a real-world dataset
Your code should be written in Python. If you wish to use another programming language, please
contact the course convenor for approval.
Note that - you are NOT ALLOWED to use libraries such as sklearn, pytorch, tensorflow to
implement the algorithms. You are allowed to use libraries for data structures, math functions and
the libraries mentioned in the tasks description below.
Task 1 - Decision Trees
Implementation Requirements
a. Read the votes.csv dataset, which includes a list of US Congress members, the name of their
party and features that represent their voting patterns on various issues.
b. Use the data from (a) to build a decision tree for predicting party from voting patterns.
c. You should be able to randomly split the dataset into testing and training set.
d. Test your classifier and print:
1. The size of the training and testing sets
2. Total accuracy
3. Confusion matrix
4. precision, recall and F1-score values for each class, together with the macro-average
and weighted average (one way to compute these from the confusion matrix is
sketched after this list)
5. Plot the learning curve (accuracy as a function of the percentage of training examples
used), i.e. show how the accuracy changes while learning.
For example, if the training set includes 1000 samples, you may stop building the tree
after 100 samples and test with the current tree, then continue building the tree and
stop again to test after 200 samples, and so on until you have the final decision tree
which is built using the full training set.
Note: To make plots, you can use the Python library matplotlib.
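For item (d.4), libraries such as sklearn are not allowed, so the per-class metrics have to be derived from the confusion matrix yourself. The sketch below shows one possible way to do this, assuming the matrix is stored with rows as actual classes and columns as predicted classes; the function and variable names are only illustrative, not prescribed.

```python
import numpy as np

def per_class_metrics(cm, class_names):
    """Precision, recall and F1 per class from a confusion matrix,
    where cm[i][j] = number of samples of actual class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    support = cm.sum(axis=1)                              # actual samples per class
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1e-12)
    recall = np.diag(cm) / np.maximum(support, 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)

    for name, p, r, f in zip(class_names, precision, recall, f1):
        print(f"{name:12s} precision={p:.3f} recall={r:.3f} f1={f:.3f}")

    # macro-average: unweighted mean over classes
    print(f"macro avg     precision={precision.mean():.3f} "
          f"recall={recall.mean():.3f} f1={f1.mean():.3f}")
    # weighted average: weighted by the number of actual samples in each class
    w = support / support.sum()
    wp, wr, wf = w @ precision, w @ recall, w @ f1
    print(f"weighted avg  precision={wp:.3f} recall={wr:.3f} f1={wf:.3f}")
```

With this layout, the total accuracy for item (d.2) is simply np.trace(cm) / cm.sum().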
Report Requirements
Your report should include:
a. Software design: a detailed description of your code and how it works. Include details on key
functions and data structures you are using in your program.
b. The results and graphs from paragraph (d.) above.
c. Discuss the results and include any observations or insights you gained from running the tests.
d. Write a conclusion paragraph that summarises and explains your findings.
Submission
For task 1 you should submit the following:
1. A zip file that includes your code (.py files). Name the file:
firstname_surname_Snumber_A2T1.zip (for example,
Bart_Simpson_s123456_A2T1.zip)
2. A copy of your source code in one of the following formats: Microsoft Word (.doc/.docx), Plain
text (.txt), or Adobe PDF (.pdf).
* Make sure your code is well documented and include comments to explain key parts.
Submission should be done through the link provided in L@G by the due date.
Marking scheme
A detailed marking rubric will be provided in L@G.
Task 2 - Classification using Neural Network
In this task, you need to implement a popular machine learning algorithm, Neural Networks. You will
train and test a simple neural network with the datasets provided to you and experiment with
different settings of hyper parameters.
The dataset to learn is called the CIFAR-10 dataset and is a subset of the 80 million tiny
images dataset. This dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images
per class. There are 50,000 training images and 10,000 test images.
The train and test datasets can be downloaded from here. Make sure you download the CIFAR-10
python version. There you will also find instructions on how to open the files as a Python dictionary.
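The CIFAR-10 page describes opening each batch file with Python's pickle module; a minimal loader along those lines might look as follows (a sketch only; the encoding argument and the byte-string keys should be checked against the files you actually download).

```python
import pickle

def unpickle(path):
    """Load one CIFAR-10 batch file into a dict with keys such as
    b'data' (a 10000 x 3072 uint8 array) and b'labels' (a list of ints)."""
    with open(path, 'rb') as fo:
        batch = pickle.load(fo, encoding='bytes')
    return batch
```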
Data
Each image is 32x32 pixels, which is 1,024 pixels in total. Each pixel has red, green and blue (RGB)
values, each represented as an integer between 0 and 255. Hence, an image in the dataset is represented as
an array of size 3,072 (32x32x3), in which the first 1024 entries contain the red channel values, the
next 1024 the green, and the final 1024 the blue.
You will notice that the dataset is divided into five training batches and one testing batch, each
containing 10,000 examples.
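As a rough sketch, the five training batches and the test batch could be assembled into arrays as shown below; it reuses the unpickle helper above, and the file names (data_batch_1 … data_batch_5, test_batch) and directory layout are those of the python-version archive, so adjust the path to wherever you extracted it. Scaling the pixel values to [0, 1] is a choice made here because it suits a sigmoid network, not something the spec requires.

```python
import numpy as np

def load_cifar10(root):
    """Stack the five training batches and load the test batch,
    scaling pixel values from 0-255 down to the range [0, 1]."""
    xs, ys = [], []
    for i in range(1, 6):
        batch = unpickle(f"{root}/data_batch_{i}")
        xs.append(np.asarray(batch[b'data'], dtype=np.float64) / 255.0)
        ys.extend(batch[b'labels'])
    x_train, y_train = np.vstack(xs), np.asarray(ys)

    test = unpickle(f"{root}/test_batch")
    x_test = np.asarray(test[b'data'], dtype=np.float64) / 255.0
    y_test = np.asarray(test[b'labels'])
    return x_train, y_train, x_test, y_test
```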
Labels
Each training and testing image is assigned to one of the following labels:
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
Implementation requirements
Your task is to implement a neural network learning algorithm that creates a neural network
classifier based on the given training dataset. Your network will have three layers: an input layer, one
hidden layer and an output layer.
The nonlinearity used in your neural net should be the basic sigmoid function:
σ(x) = 1 / (1 + e^(-x))
The main steps of training a neural net using stochastic gradient descent are:
1. Assign random initial weights and biases to the neurones. Each initial weight or bias is a random
floating-point number drawn from the standard normal distribution (mean 0 and variance 1).
2. For each training example in a mini-batch, use backpropagation to calculate a gradient estimate,
which consists of the following steps:
a. Feed forward the input to get the activations of the output layer.
b. Calculate derivatives of the cost function for that input with respect to the activations of
the output layer.
c. Calculate the errors for all the weights and biases of the neurones using backpropagation.
3. Update weights (and biases) using stochastic gradient descent:
w → w − (η / m) · Σ_{i=1..m} error_w^i
where m is the number of training examples in a mini-batch, error_w^i is the error of weight w for
input i, and η is the learning rate.
4. Repeat this for all mini-batches. Repeat the whole process for a specified number of epochs. At the
end of each epoch, evaluate the network on the test data and display its accuracy.
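As a small sketch of the pieces used in steps 2(a)-(c), the sigmoid and its derivative can be written with numpy (allowed here as a math library); the names are illustrative and are reused in the sketches further below.

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z)), needed when backpropagating errors
    s = sigmoid(z)
    return s * (1.0 - s)
```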
For this part, use the quadratic cost function:
C(w, b) = (1 / 2n) · Σ_{i=1..n} || f(x_i) − y_i ||²
where
w: weights
b: biases
n: number of test instances
x_i: the i-th test instance vector
y_i: the i-th test label vector, i.e. if the label for x_i is 8, then y_i will be [0,0,0,0,0,0,0,0,1,0] (see below)
f(x): the label predicted by the neural network for an input x
For this image recognition assignment, we will encode the output (a number between 0 and 9) by
using 10 output neurones. The neurone with the highest activation will be taken as the prediction of
the network. So the output number y has to be represented as a vector of 10 binary digits, all of
them being 0 except for the entry at the correct digit.
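A small helper for this 10-digit encoding might look like the following (the name one_hot is illustrative; a column vector is used so it matches the layer activations in the network sketch below).

```python
import numpy as np

def one_hot(label, num_classes=10):
    """Return a (num_classes, 1) vector with a 1 at position `label`,
    e.g. label 8 -> [0,0,0,0,0,0,0,0,1,0]."""
    v = np.zeros((num_classes, 1))
    v[label] = 1.0
    return v
```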
It is important that your code enables simple and easy adjustment of key parameters such as the
number of neurones in the input, hidden and output layers, as well as the number of epochs, mini-batch size and learning rate.
Your code should also run on a small neural network for testing purposes. See below.
You should do the following:
CIFAR-10 dataset:
1. Create a neural network of size [3072, 30, 10], i.e. 3072 neurones in the input layer, 30 neurones
in the hidden layer, and 10 neurones in the output layer. Then train it on the training dataset with
the following settings: epochs = 20, mini-batch size = 100, learning rate η = 0.1 (a usage sketch for
items 1-3 appears after this list).
Test your code with the testing dataset and draw a graph of test accuracy vs epoch. Print the
maximum accuracy achieved.
2. Learning rate: Train a new neural network with the same settings as in (1.) but with different
learning rates η = 0.001, 0.01, 1.0, 10, 100. Plot a graph of test accuracy vs epoch for each η on
the same graph. Print the maximum accuracy achieved for each η. Remember to create a new
neural net each time so it starts learning from scratch.
3. Mini-batch size: Train a new neural net with the same settings as in (1.) above but with different
mini-batch sizes = 1, 5, 20, 100, 300. Plot maximum test accuracy vs mini-batch size. Which one
achieves the maximum test accuracy? Which one is the slowest?
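The experiments above could then be driven along the following lines, reusing load_cifar10, one_hot and Network from the earlier sketches; the directory name and the reshaping to column vectors are assumptions of this sketch, while the hyper-parameter values are those given in items 1 and 2 (item 3 is analogous, looping over mini-batch sizes instead).

```python
x_train, y_train, x_test, y_test = load_cifar10("cifar-10-batches-py")
training_data = [(x.reshape(-1, 1), one_hot(y)) for x, y in zip(x_train, y_train)]
test_data = [(x.reshape(-1, 1), int(y)) for x, y in zip(x_test, y_test)]

# item 1: [3072, 30, 10], 20 epochs, mini-batch size 100, learning rate 0.1
net = Network([3072, 30, 10])
acc = net.sgd(training_data, epochs=20, mini_batch_size=100, eta=0.1,
              test_data=test_data)
print("maximum accuracy:", max(acc))

# item 2: a fresh network for each learning rate, plotted on the same graph
for eta in (0.001, 0.01, 1.0, 10, 100):
    net = Network([3072, 30, 10])
    acc = net.sgd(training_data, 20, 100, eta, test_data)
    print(f"eta={eta}: maximum accuracy {max(acc):.4f}")
```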
Testing with a small neural network:
The above small network has two neurones in the input layer, two neurones in the hidden layer and
two neurones in the output layer. Weights and biases are marked on the figure; w9, w10, w11 and w12
are the weights for the bias terms.
There are two training samples: X1 = (0.1, 0.1) and X2 = (0.1, 0.2). The label for X1 is 0, so the desired
output for X1 in the output layer should be Y1 = (1, 0). The label for X2 is 1, so the desired output for
X2 in the output layer should be Y2 = (0, 1).
You should update the weights and biases for this small neural net using stochastic gradient descent
with back-propagation, using a batch size of 2 and a learning rate of 0.1 for one epoch, i.e. run the
forward pass and backward pass for X1, then the forward pass and backward pass for X2, then update
the twelve weights based on the values you calculated from X1 and X2.
Output: Print the new (updated) twelve weights.
A sample output for the small network will be available on L@G to allow you to test your
implementation.
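Assuming the Network class from the sketch above, the small-network check could be wired up roughly as follows. The zero values below are placeholders only, and the mapping of w1-w12 onto the weight and bias arrays is an assumption of this sketch: both must be replaced by the values and layout shown in the figure.

```python
import numpy as np

net = Network([2, 2, 2])
# Placeholder initial values -- substitute w1..w12 from the figure.
net.weights = [np.array([[0.0, 0.0],      # input -> hidden weights
                         [0.0, 0.0]]),
               np.array([[0.0, 0.0],      # hidden -> output weights
                         [0.0, 0.0]])]
net.biases = [np.array([[0.0], [0.0]]),   # hidden-layer bias weights (e.g. w9, w10)
              np.array([[0.0], [0.0]])]   # output-layer bias weights (e.g. w11, w12)

x1, y1 = np.array([[0.1], [0.1]]), np.array([[1.0], [0.0]])
x2, y2 = np.array([[0.1], [0.2]]), np.array([[0.0], [1.0]])

# one update: gradients from both samples, batch size 2, learning rate 0.1
net.update_mini_batch([(x1, y1), (x2, y2)], eta=0.1)
print(net.weights)
print(net.biases)
```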
Report requirements
Your report should include:
a. Software design: information about key functions and data structures.
b. All experimental results and graphs mentioned above, with detailed explanations of these results,
i.e. try to explain the logic behind those results.
c. Write a conclusion paragraph that summarises and explains your findings.
Submission
For task 2 you should submit the following:
1. A zip file that includes your code (.py files). Name the file:
firstname_surname_Snumber_A2T2.zip (for example,
Bart_Simpson_s123456_A2T2.zip)
2. A copy of your source code in one of the following formats: Microsoft Word (.doc/.docx), Plain
text (.txt), or Adobe PDF (.pdf).
3. Report: You can cover Tasks 1 and 2 in the same report or submit two separate reports.
Submission should be done through the link provided in L@G by the due date. Make sure you submit
at least five files: two zip files, two copies of the code, and a report.
Marking scheme

    vlc.exe.lnk双击这个文件,能正常打开vlc,但是用System.Diagnostics.Process.Start(Path.GetFullPath("vlc.exe.lnk"),url);没有任何反应。根据常理,不应该出现这个问题。但是现实就是这么魔幻,偏偏有这个问题。根据上面图,根据快捷方式是可以获取到vlc可执行文件的路径的,然后在网上搜索......