
Assignment 2: Artificial Intelligence (3007_7059 Combined)

Assignment 2

The dataset is available here: https://myuni.adelaide.edu.au/courses/95211/files/14537288/download

Part 1 Wine Quality Prediction with 1NN (K-d Tree)

Wine experts evaluate the quality of wine based on sensory data. We can also collect features of wine from objective tests, so these objective features can be used to predict the expert's judgement, i.e., the quality rating of the wine. This can be formulated as a supervised learning problem with the objective features as the data features and the wine quality rating as the data labels.

In this assignment, we provide objective features obtained from physicochemical statistics for each white wine sample, together with its corresponding rating provided by wine experts. You are expected to implement a k-d tree (KDT), use the training set to build your k-d tree, and then predict the wine quality of each sample in the test set by searching the tree.

Wine quality rating is measured in the range of 0-9. In our dataset, we only keep the samples for quality ratings 5, 6 and 7. The 11 objective features are listed as follows [1]:

f_acid : fixed acidity

v_acid : volatile acidity

c_acid : citric acid

res_sugar : residual sugar

chlorides : chlorides

fs_dioxide : free sulfur dioxide

ts_dioxide : total sulfur dioxide

density : density

pH : pH

sulphates : sulphates

alcohol : alcohol

Explanation of the Data.

train: The first 11 columns represent the 11 features and the 12th column is the wine quality. A sample is depicted as follows:

f_acid  v_acid  c_acid  res_sugar  chlorides  fs_dioxide  ts_dioxide  density  pH    sulphates  alcohol  quality
8.10    0.270   0.41    1.45       0.033      11.0        63.0        0.99080  2.99  0.56       12.0     5
8.60    0.230   0.40    4.20       0.035      17.0        109.0       0.99470  3.14  0.53       9.7      5
7.90    0.180   0.74    1.20       0.040      16.0        75.0        0.99200  3.18  0.63       10.8     5
8.30    0.420   0.62    19.25      0.040      41.0        172.0       1.00020  2.98  0.67       9.7      5
6.50    0.310   0.14    7.50       0.044      34.0        133.0       0.99550  3.22  0.50       9.5      5

test: The 11 columns represent the 11 features; the wine quality is what your program must predict. A sample is depicted as follows:

f_acid  v_acid  c_acid  res_sugar  chlorides  fs_dioxide  ts_dioxide  density  pH    sulphates  alcohol
7.0     0.360   0.14    11.60      0.043      35.0        228.0       0.99770  3.13  0.51       8.900000
6.3     0.270   0.18    7.70       0.048      45.0        186.0       0.99620  3.23  0.47       9.000000
7.2     0.290   0.20    7.70       0.046      51.0        174.0       0.99582  3.16  0.52       9.500000
7.1     0.140   0.35    1.40       0.039      24.0        128.0       0.99212  2.97  0.68       10.400000
7.6     0.480   0.28    10.40      0.049      57.0        205.0       0.99748  3.24  0.45       9.300000

1.1 1NN (K-d Tree)

From the given training data, our goal is to learn a function that can predict the wine quality rating of a wine sample based on the objective features. In this assignment, the predictor function will be constructed as a k-d tree. Since the attributes (objective features) are continuously valued, you shall apply the k-d tree algorithm for continuous data, as outlined in Algorithm 1; it is the same as taught in the lecture. Once the tree is constructed, you will search the tree to find the 1-nearest neighbour of a query point and label the query point with that neighbour's quality rating. Please refer to the search logic taught in the lecture to write your code for the 1NN search.

 

Algorithm 1 BuildKdTree(P, D)
Require: A set of points P of M dimensions and current depth D.
1: if P is empty then
2:   return null
3: else if P only has one data point then
4:   Create new node node
5:   node.d ← d
6:   node.val ← val
7:   node.point ← current point
8:   return node
9: else
10:   d ← D mod M
11:   val ← Median value along dimension d among points in P
12:   Create new node node
13:   node.d ← d
14:   node.val ← val
15:   node.point ← point at the median along dimension d
16:   node.left ← BuildKdTree(points in P for which value at dimension d is less than or equal to val, D + 1)
17:   node.right ← BuildKdTree(points in P for which value at dimension d is greater than val, D + 1)
18:   return node
19: end if

Note: Sorting is not necessary in some cases, depending on your implementation. Please work out whether your code needs to sort the data first. Also, if you compute the median yourself, note that when there is an even number of points, say [1, 2, 3, 4], the median is 2.5.
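For concreteness, here is a minimal Python sketch of Algorithm 1. The Node class, the list-of-lists representation of P, and the interpretation of the single-point case (where the pseudocode leaves d and val implicit) are assumptions made for illustration only; adapt them to your own design.

```python
# Sketch of Algorithm 1 (illustrative assumptions: Node layout, list-of-lists data,
# interpretation of the single-point case).
class Node:
    def __init__(self, d, val, point):
        self.d = d          # splitting dimension
        self.val = val      # splitting value (median along dimension d)
        self.point = point  # stored data point (11 features + quality for training rows)
        self.left = None
        self.right = None

def build_kdtree(points, depth, M=11):
    if not points:
        return None
    d = depth % M
    if len(points) == 1:
        p = points[0]
        return Node(d, p[d], p)               # single point becomes a leaf
    values = sorted(p[d] for p in points)
    n = len(values)
    if n % 2 == 1:                            # odd count: middle value
        val = values[n // 2]
    else:                                     # even count: average the two middle values
        val = (values[n // 2 - 1] + values[n // 2]) / 2.0
    # "Point at the median": here, the point whose value at d is closest to val
    # (an assumption, since an averaged median may not coincide with a data point).
    node = Node(d, val, min(points, key=lambda p: abs(p[d] - val)))
    node.left = build_kdtree([p for p in points if p[d] <= val], depth + 1, M)
    node.right = build_kdtree([p for p in points if p[d] > val], depth + 1, M)
    return node
```

Note that fully identical rows would always fall on the same side of every split, so this sketch assumes the training data contains no exact duplicate points.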

 

1.2 Deliverable

Write your k-d tree program in Python 3.6.9 in a file called nn_kdtree.py. Your program must be able to run as follows:

$ python nn_kdtree.py [train] [test] [dimension]

The inputs/options to the program are as follows:

[train] specifies the path to the training data file

[test] specifies the path to the testing data file

[dimension] is used to decide which dimension to start the comparison with (see Algorithm 1)

Given the inputs, your program must construct a k-d tree (following the prescribed algorithms) using the training data, then predict the quality rating of each of the wine samples in the testing data. Your program must then print to standard output (i.e., the command prompt) the list of predicted wine quality ratings, vertically based on the order in which the testing cases appear in [test].
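As a rough illustration of how the pieces fit together, the skeleton below parses the command-line arguments, builds the tree with the build_kdtree sketch above, runs a 1NN search with back-tracking, and prints one prediction per line. The search shown is the standard k-d tree nearest-neighbour procedure, not necessarily identical to the lecture version, and the whitespace-separated file layout is an assumption; check both against the actual specification.

```python
# nn_kdtree.py skeleton (illustrative; reuses build_kdtree from the sketch above).
import sys
import pandas as pd

def nearest_neighbour(node, query, best=None, best_dist=float('inf')):
    """1NN search with back-tracking over squared Euclidean distance (sketch)."""
    if node is None:
        return best, best_dist
    dist = sum((node.point[i] - query[i]) ** 2 for i in range(len(query)))
    if dist < best_dist:
        best, best_dist = node.point, dist
    # Descend into the half of the split containing the query first.
    near, far = (node.left, node.right) if query[node.d] <= node.val else (node.right, node.left)
    best, best_dist = nearest_neighbour(near, query, best, best_dist)
    # Back-track only if the splitting plane could hide a closer point.
    if (query[node.d] - node.val) ** 2 < best_dist:
        best, best_dist = nearest_neighbour(far, query, best, best_dist)
    return best, best_dist

if __name__ == '__main__':
    train_path, test_path, dimension = sys.argv[1], sys.argv[2], int(sys.argv[3])
    # Assumed file layout: whitespace-separated columns with a header row.
    train = pd.read_csv(train_path, delim_whitespace=True)
    test = pd.read_csv(test_path, delim_whitespace=True)
    root = build_kdtree(train.values.tolist(), dimension)   # [dimension] as the starting depth D
    for row in test.values.tolist():
        point, _ = nearest_neighbour(root, row[:11])         # query on the 11 features
        print(int(point[11]))                                # predicted quality, one per line
```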

1.3 Python Libraries

You are allowed to use the Python standard library to write your k-d tree learning program (see https://docs.python.org/3/library/ for the components that make up the Python v3.6.9 standard library). In addition to the standard library, you are allowed to use NumPy and Pandas. Note that the marking program will not be able to run your program to completion if other third-party libraries are used. You are NOT allowed to use implemented tree structures from any Python package; otherwise the mark will be set to 0.

1.4 Submission

You must submit your program files on Gradescope. Please use the course code NPD6JD to enroll in the course. Instructions on accessing Gradescope and submitting assignments are provided at https://help.gradescope.com/article/5d3ifaeqi4-student-canvas.

For undergraduates, please submit your k-d tree program (nn_kdtree.py) to Assignment 2 - UG.

1.5 Expected Run Time

Your program must be able to terminate within 600 seconds on the sample data given.

 

1.6 Debugging Suggestions

Step-by-step debugging, checking intermediate values/results as you go, will help you identify problems in your code. Most Python IDEs support this; if yours does not, you can also print the intermediate values out. You can use the sample data, or create data in the same format, for debugging.

1.7 Assessment

Gradescope will compile and run your code on several test problems. If it passes all tests, you will get 15% (undergrads) or 12% (postgrads) of the overall course mark. For undergraduates, bonus marks of 3% will be awarded if Section 2 is completed correctly.

There will be no further manual inspection/grading of your program to award marks based on coding style, commenting, or “amount” of code written.

1.8 Using other source code

You may not use other source code for this assignment. All submitted code must be your own work written from scratch. Only by writing the solution yourself will you fully understand the concept.

1.9 Due date and late submission policy

This assignment is due by 11:59 pm Friday 3 May 2024. If your submission is late, the maximum mark you can obtain will be reduced by 25% per day (or part thereof) past the due date or any extension you are granted.

Part 2 Wine Quality Prediction with Random Forest

For postgraduate students, completing this section will give you the remaining 3% of the assignment marks. In this task, you will extend what you learned about the k-d tree to a k-d forest. The process for building a simplified k-d forest from N input-output pairs is:

1. Randomly select a set of N' distinct samples (i.e., no duplicates), where N' = N * 80% (rounded to an integer). This dataset is used for constructing a k-d tree (i.e., the root node of the k-d tree).

 

2. Build a k-d tree on the dataset from (1) by applying Algorithm 1.

3. Repeat (1) and (2) until reaching the maximum number of trees.

This process is also shown in Algorithm 2. In k-d forest learning, a sample set is used to construct a k-d tree. That is to say, different trees in the forest could have different root data. For prediction, the k-d forest will choose the most voted label as its prediction. For the wine quality prediction task, you shall apply Algorithm 2 for k-d forest learning and apply Algorithm 3 to predict the wine quality for a new wine sample. To generate samples, please use the following (incomplete) code to generate the same samples as our testing scripts:

    import random
    ...
    N = ...
    N_prime = ...                              # N' in the text above
    index_list = [i for i in range(0, N)]      # create a list of indexes for all data
    sample_indexes = []
    for j in range(0, n_tree):
        random.seed(rand_seed + j)             # rand_seed is one of the input parameters
        subsample_idx = random.sample(index_list, k=N_prime)   # create N' unique indices
        sample_indexes = sample_indexes + subsample_idx

Algorithm 2 KdForest(data, d_list, rand_seed)
Require: data in the form of N input-output pairs; d_list, a list of depths.
1: forest ← []
2: n_trees ← len(d_list)
3: sample_indexes ← N' * n_trees integers with values in [0, N), generated using the method above
4: count ← 0
5: for count < n_trees do
6:   sampled_data ← N' data pairs selected by the next N' indexes from sample_indexes, taken sequentially
7:   n ← BuildKdTree(sampled_data, d_list[count])   ⇒ Algorithm 1
8:   forest.append(n)

 

9: end for
10: return forest

Algorithm 3 Predict_KdForest(forest, data)
Require: forest is a list of tree roots, data in the form of attribute values x.
1: labels ← []
2: for each tree n in the forest do
3:   label ← 1NN search on tree n
4:   labels.append(label)
5: end for
6: return the most voted label in labels
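A compact Python sketch of Algorithms 2 and 3 follows. It reuses build_kdtree and nearest_neighbour from the Part 1 sketches; the function names and the Counter-based vote are illustrative assumptions, not the prescribed implementation.

```python
# Sketch of Algorithms 2 and 3 (reuses build_kdtree / nearest_neighbour from Part 1).
import random
from collections import Counter

def kd_forest(data, d_list, rand_seed, n_prime):
    """Algorithm 2: one k-d tree per entry in d_list, each built on its own subsample."""
    n_trees = len(d_list)
    index_list = list(range(len(data)))
    sample_indexes = []
    for j in range(n_trees):
        random.seed(rand_seed + j)
        sample_indexes += random.sample(index_list, k=n_prime)   # N' unique indices per tree
    forest = []
    for count in range(n_trees):
        idx = sample_indexes[count * n_prime:(count + 1) * n_prime]   # consume sequentially
        sampled_data = [data[i] for i in idx]
        forest.append(build_kdtree(sampled_data, d_list[count]))
    return forest

def predict_kd_forest(forest, query):
    """Algorithm 3: 1NN on every tree, then return the most voted label."""
    labels = []
    for root in forest:
        point, _ = nearest_neighbour(root, query)
        labels.append(int(point[11]))
    return Counter(labels).most_common(1)[0][0]
```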

2.1 Deliverables

Write your random forest program in Python 3.6.9 in a file called nn_kdforest.py. Your program must be able to run as follows:

$ python nn_kdforest.py [train] [test] [random_seed] [d_list]

The inputs/options to the program are as follows:

[train] specifies the path to the training data file

[test] specifies the path to the testing data file

[random_seed] is the seed value used to generate random values

[d_list] is a list of depth values (in Algorithm 2, n_trees == len(d_list))

Given the inputs, your program must learn a random forest (following the prescribed algorithms) using the training data, then predict the quality rating of each wine sample in the testing data. Your program must then print to standard output (i.e., the command prompt) the list of predicted wine quality ratings, vertically based on the order in which the testing cases appear in [test].
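For example, a hypothetical entry point for nn_kdforest.py could look like the following; it reuses kd_forest and predict_kd_forest from the sketch above. How [d_list] is passed on the command line is not specified here, so the sketch assumes the remaining arguments are space-separated depth values; verify the expected format before submitting.

```python
# nn_kdforest.py skeleton (illustrative; reuses kd_forest / predict_kd_forest above).
import sys
import pandas as pd

if __name__ == '__main__':
    train_path, test_path = sys.argv[1], sys.argv[2]
    rand_seed = int(sys.argv[3])
    d_list = [int(v) for v in sys.argv[4:]]       # assumed: space-separated depth values
    train = pd.read_csv(train_path, delim_whitespace=True)
    test = pd.read_csv(test_path, delim_whitespace=True)
    data = train.values.tolist()
    n_prime = round(len(data) * 0.8)              # N' = 80% of N, rounded to an integer
    forest = kd_forest(data, d_list, rand_seed, n_prime)
    for row in test.values.tolist():
        print(predict_kd_forest(forest, row[:11]))  # one predicted rating per line
```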

Submit your program in the same way as the submission for Sec. 1. For postgraduates, please submit your learning programs (nn_kdtree.py and nn_kdforest.py) to Assignment 2 - PG. The due date, late submission policy, and code reuse policy are also the same as in Sec 1.

 

2.2 Expected Run Time

Your program must be able to terminate within 600 seconds on the sample data given.

2.3 Debugging Suggestions

In addition to Sec. 1.6, another value worth checking when debugging is (but is not limited to) sample_indexes: with the random seed set, the indexes should be identical each time you run the code.
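For instance, a quick stand-alone check (with hypothetical values for the seed, N and N') should print the same index list on every run:

```python
import random

rand_seed, N, N_prime = 5, 20, 16          # hypothetical values, for this check only
random.seed(rand_seed + 0)                 # j = 0: the first tree's subsample
print(random.sample(range(N), k=N_prime))  # identical output on every run
```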

2.4 Assessment

Gradescope will compile and run your code on several test problems. If it passes all tests, you will get 3% of the overall course mark.

 
