
COMP 627 Neural Networks and Applications



COMP 627 – Assignment 1

Note: Refer to Eq. 2.11 in the textbook for the weight update. Both weights, w1 and b, need to be adjusted.

According to Eq. 2.11, for input x1, error E = t - y and learning rate β:

w1_new = w1_old + β E x1

b_new = b_old + β E
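
As a minimal sketch (with illustrative names only), this rule can be written as a small Python helper that takes the already-computed error E = t - y and returns the adjusted weights:

def weight_update(w1, b, x1, E, beta):
    # Eq. 2.11 style update: w1_new = w1_old + beta*E*x1, b_new = b_old + beta*E
    # (the bias input is 1, so the bias moves by beta*E).
    return w1 + beta * E * x1, b + beta * E

Calling weight_update(w1, b, x1, t - y, beta) returns the adjusted (w1, b) pair after presenting one input.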

COMP 627 Neural Networks and Applications

Assignment 1

Perceptron and Linear neuron: Manual training and real-life case studies

Part 1: Perceptron

[08 marks]

Download the Fish_data.csv file from the LEARN page. Use this dataset to answer the two questions (i) and (ii) below on the Perceptron. The dataset consists of 3 columns. The first two columns are inputs (ring diameter of scales of fish grown in sea water and fresh water, respectively). The third column is the output, which states whether the fish is Canadian or Alaskan (the value is 0 for Canadian and 1 for Alaskan). The perceptron model classifies fish as Canadian or Alaskan depending on these two measures of ring diameter of scales.
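
As a small, hedged sketch of how the file could be loaded (the label column name 'Canadian_0_Alaskan_1' is taken from the note later in this assignment; the column order is assumed to match the description above):

import pandas as pd

# Load the fish data: the first two columns are the ring-diameter inputs,
# the third ('Canadian_0_Alaskan_1') is the 0/1 class label.
df = pd.read_csv("Fish_data.csv")
X = df.iloc[:, 0:2].to_numpy()              # inputs: sea-water and fresh-water ring diameters
t = df["Canadian_0_Alaskan_1"].to_numpy()   # targets: 0 = Canadian, 1 = Alaskan
print(df.head())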

(i)

Extract the first AND last rows of data and label these rows 1 and 2. Use an initial weight vector of [w1 = 102, w2 = -28, b = 5.0] and a learning rate β of 0.5 to train a perceptron model manually as follows:

Adjust the weights in example-by-example mode of learning using the two input vectors. Present the input data in the order of rows 1 and 2 to the perceptron. After the presentation of each input vector and the corresponding weight adjustment, show the resulting classification boundary on the two data points, as in Fig. 2.15 in the book. For each round of weight adjustment there will be a new classification boundary line. You can do the plots in Excel, by hand, in Python, or with any other plotting software. Repeat this for 2 epochs (i.e., pass the two input vectors twice through the perceptron). A code sketch of this procedure follows the question.

(4 marks)
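
A minimal sketch of this manual procedure, assuming a step activation (output 1 when the net input is at least 0) and filling in the two rows and their labels from Fish_data.csv, might look like the following; the plotting details are only illustrative.

import numpy as np
import matplotlib.pyplot as plt

def train_two_rows(rows, targets, epochs=2, beta=0.5):
    # Example-by-example perceptron training on two rows, drawing the
    # boundary w1*x1 + w2*x2 + b = 0 after every weight adjustment.
    w = np.array([102.0, -28.0])   # initial [w1, w2] from the question
    b = 5.0                        # initial bias
    x1_line = np.linspace(rows[:, 0].min() - 1, rows[:, 0].max() + 1, 100)
    for epoch in range(epochs):
        for x, t in zip(rows, targets):
            y = 1 if np.dot(w, x) + b >= 0 else 0     # step output (assumed)
            E = t - y
            w, b = w + beta * E * x, b + beta * E
            # boundary rearranged as x2 = -(w1*x1 + b) / w2
            plt.plot(x1_line, -(w[0] * x1_line + b) / w[1],
                     label=f"epoch {epoch + 1}, after one update")
    plt.scatter(rows[:, 0], rows[:, 1], c=targets)
    plt.legend()
    plt.show()
    return w, b

The function would be called with a 2x2 array holding rows 1 and 2 and a length-2 array of their 0/1 labels.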

(ii)

Write Python code to create a perceptron model that uses the whole dataset in Fish_data.csv to classify fish as Canadian or Alaskan depending on the two input measures of ring diameter of scales. Use 200 epochs for an accurate model.

Modify your Python code to show the final classification boundary on the data.

Write the equation of this boundary line.

Compare it with the classification boundary in the book.

(4 marks)
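
One possible sketch for this part trains the perceptron over the whole file with a plain Python loop and prints the boundary equation. The starting weights are carried over from part (i) only as an assumption, since the question does not fix them, and the step activation is also assumed; a plotting helper is sketched further below, after the note on splitting the dataset.

import numpy as np
import pandas as pd

df = pd.read_csv("Fish_data.csv")
X = df.iloc[:, 0:2].to_numpy()                 # ring-diameter inputs
t = df["Canadian_0_Alaskan_1"].to_numpy()      # 0 = Canadian, 1 = Alaskan

w, b, beta = np.array([102.0, -28.0]), 5.0, 0.5   # assumed starting point (as in part (i))
for epoch in range(200):                           # 200 epochs as required
    for xi, ti in zip(X, t):
        y = 1 if np.dot(w, xi) + b >= 0 else 0     # step activation (assumed)
        E = ti - y
        w, b = w + beta * E * xi, b + beta * E

# Boundary w1*x1 + w2*x2 + b = 0 rearranged as x2 = -(w1/w2)*x1 - b/w2
print(f"x2 = {-w[0] / w[1]:.3f} * x1 + {-b / w[1]:.3f}")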


Note: For adjusting weights, follow the batch learning example for the linear neuron on page 57 of the textbook, which follows Eq. 2.36. After each epoch, adjust the weights as follows:

w1_new = w1_old + β (E1 x1 + E2 x2) / 2

b_new = b_old + β (E1 + E2) / 2

where E1 and E2 are the errors for the two inputs.
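
A minimal sketch of one such batch epoch for a single linear neuron (output y = w1*x + b, names illustrative) could be:

def batch_epoch(w1, b, xs, ts, beta):
    # Compute the error for every pattern first, then apply one averaged update.
    errors = [t - (w1 * x + b) for x, t in zip(xs, ts)]        # E_i = t_i - y_i
    w1 = w1 + beta * sum(E * x for E, x in zip(errors, xs)) / len(xs)
    b = b + beta * sum(errors) / len(xs)
    return w1, b, errors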

Part 2: Single Linear Neuron

[12 marks]

Download the heat_influx_north_south.csv file from the LEARN page. Use this dataset to develop a single linear neuron model to answer questions (i) to (v) below. This is the dataset from the textbook and lectures, where a linear neuron model had been trained to predict heat influx into a house from the north and south elevations of the house. Note that the dataset has been normalised (between 0 and 1) to increase the accuracy of the models. When data (inputs and outputs) have very different ranges, normalisation helps balance this issue.
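
The supplied file is already normalised, but as an illustration of what that normalisation does, a min-max rescaling of each column to [0, 1] can be reproduced with a few lines of pandas:

import pandas as pd

df = pd.read_csv("heat_influx_north_south.csv")
# Min-max normalisation: rescale every column to [0, 1] so the inputs and the
# output contribute on comparable scales during training.
df_norm = (df - df.min()) / (df.max() - df.min())
print(df_norm.describe())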

(i)

Use two rows of data (rows 1 and 2: (0.319, 0.929) and (0.302, 0.49), respectively) to train a linear neuron manually to predict heat influx into a home based on the north elevation (angle of exposure to the sun) of the home (the value in the 'North' column is the input for the single neuron, and the output is the value in the 'HeatFlux' column). Use an initial weight vector of [b (bias) = 2.1, w1 = -0.2] and a learning rate of 0.5. Bias input = 1. You need to adjust both weights, b and w1.

(3 marks)

  1. a) Train the linear neuron manually in batch mode. Repeat this for 2 epochs.
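
Reusing the batch rule from the note above, a worked sketch of these two epochs might look like this; the pairs from the question are read here as (North input, HeatFlux target), which is an interpretation rather than something stated explicitly.

xs = [0.319, 0.302]   # North inputs of rows 1 and 2
ts = [0.929, 0.490]   # HeatFlux targets of rows 1 and 2
w1, b, beta = -0.2, 2.1, 0.5          # initial weights and learning rate from question (i)

for epoch in range(2):                # batch mode: one averaged update per epoch
    errors = [t - (w1 * x + b) for x, t in zip(xs, ts)]
    w1 = w1 + beta * sum(E * x for E, x in zip(errors, xs)) / 2
    b = b + beta * sum(errors) / 2
    print(f"after epoch {epoch + 1}: w1 = {w1:.4f}, b = {b:.4f}")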

Note:

Try to separate the dataset into two datasets based on the value in the 'Canadian_0_Alaskan_1' column. Example code is given below.

import pandas as pd

df = pd.read_csv("Fish_data.csv")

# create dataframe X1 with the input columns of the rows whose 'Canadian_0_Alaskan_1' value is 0
X1 = df.loc[df["Canadian_0_Alaskan_1"] == 0].iloc[:, 0:2]

Plot the data of the two datasets with different markers, 'o' and 'x'.

Plot the decision boundary line using the equation used in Laboratory Tutorial 2 – Part 2 (please note that there is a correction in the equation and the updated assignment is available on LEARN).

The final plot should look like this:


[Figure: example final plot showing the two fish classes and the decision boundary]
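
A hedged sketch of such a plot is given below; it writes the boundary as x2 = -(w1*x1 + b)/w2, which may differ in form from the corrected equation in Laboratory Tutorial 2, and the weights w1, w2, b would come from the trained perceptron.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def plot_classes_and_boundary(csv_path, w1, w2, b):
    # Scatter the two classes with 'o'/'x' markers and overlay the line
    # w1*x1 + w2*x2 + b = 0, drawn as x2 = -(w1*x1 + b) / w2.
    df = pd.read_csv(csv_path)
    canadian = df.loc[df["Canadian_0_Alaskan_1"] == 0].iloc[:, 0:2]
    alaskan = df.loc[df["Canadian_0_Alaskan_1"] == 1].iloc[:, 0:2]
    plt.scatter(canadian.iloc[:, 0], canadian.iloc[:, 1], marker="o", label="Canadian (0)")
    plt.scatter(alaskan.iloc[:, 0], alaskan.iloc[:, 1], marker="x", label="Alaskan (1)")
    x1_line = np.linspace(df.iloc[:, 0].min(), df.iloc[:, 0].max(), 100)
    plt.plot(x1_line, -(w1 * x1_line + b) / w2, "k-", label="decision boundary")
    plt.xlabel(df.columns[0])
    plt.ylabel(df.columns[1])
    plt.legend()
    plt.show()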

Note: To retrieve the mean squared error, you can use the following code

from sklearn.metrics import mean_squared_error

print(mean_squared_error(Y, predicted_y))

1. b) After the training with the 2 epochs is over, use your final weights to test how the neuron is now performing by passing the same two data points again into the neuron and computing the error for each input (E1 and E2). Compute the Mean Square Error (MSE) for the 2 inputs using the formula below.
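
The original formula is not reproduced here; the sketch below assumes the standard mean squared error over the two patterns, MSE = (E1^2 + E2^2) / 2, and continues from the batch-training sketch in 1. a) (reusing w1, b, xs and ts from there). Its result should agree with sklearn's mean_squared_error from the note above when given the same targets and predictions.

# Pass the same two points through the trained neuron and compute per-pattern errors.
y1, y2 = w1 * xs[0] + b, w1 * xs[1] + b
E1, E2 = ts[0] - y1, ts[1] - y2
mse = (E1 ** 2 + E2 ** 2) / 2     # assumed standard MSE over the two patterns
print(f"E1 = {E1:.4f}, E2 = {E2:.4f}, MSE = {mse:.4f}")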
