
Fast Training Algorithms for Deep Convolutional Fuzzy Systems With Application to Stock Index Predic


Similar to the deep convolutional neural network (DCNN), the fuzzy-system field has the deep convolutional fuzzy system (DCFS): every layer is made of fuzzy systems, and the output of one layer is the input of the next.
The goal of this paper is to speed up DCFS training while keeping the model interpretable.
This line of models dates back to around 1990 and can also be trained with backpropagation.
The DCFS has been confined to low-dimensional, small datasets because the computational burden becomes too heavy on large data. Since fuzzy-system parameters have physical meaning, the paper uses the Wang–Mendel method (the author's best-known work) to speed up the computation.


Section II: details of the DCFS structure

The input is high-dimensional and the output is a scalar (every FS outputs a scalar, and so does the final output); a multi-output DCFS could be built as several single-output DCFSs.
A convolution window of length m (e.g., 3, 4, or 5) slides over the low-dimensional inputs x and selects a small input set I (e.g., 3 of the x values), and I is fed into a fuzzy system (FS).
Inside the FS, each input is covered by q fuzzy sets with membership functions A, together with IF-THEN fuzzy rules: the input x is matched to its fuzzy sets A, and the rules map them to the output y. The only trainable parameters of an FS are the rule coefficients c.
So the DCFS can be interpreted as follows: inputs x1, x2, x3, ... determine rule coefficients c, and every FS can be viewed as a small system of its own, which makes it easy to trace problems (how to do so is discussed after Section III).
The author gives the computation formula of the FS; a standard form is reproduced below.
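The note does not reproduce that formula, so for reference here is the standard form of a zero-order fuzzy system with product inference and center-average defuzzification, which is also what the wmdeepyy function in the code below computes (the paper's exact notation may differ). For inputs x = (x_1, ..., x_m), q fuzzy sets A_i^j per input, and one trainable coefficient c per rule:

f(x) = \frac{\sum_{j_1=1}^{q} \cdots \sum_{j_m=1}^{q} c_{j_1 \cdots j_m} \prod_{i=1}^{m} \mu_{A_i^{j_i}}(x_i)}{\sum_{j_1=1}^{q} \cdots \sum_{j_m=1}^{q} \prod_{i=1}^{m} \mu_{A_i^{j_i}}(x_i)}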
Computation and storage are reduced by parameter sharing: all FSs within the same level are identical.

Section III: four training algorithms for the DCFS

How to train the DCFS offline so that it matches the input-output pairs.
The membership functions A are placed according to the minimum and maximum of x over all input-output pairs, so that the whole data range is covered (see the layout below).
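Concretely, matching the meb2 function in the code below, the q membership functions of one input are triangles with evenly spaced centers over the observed range [x_min, x_max]:

h = \frac{x_{\max} - x_{\min}}{q - 1}, \qquad e_j = x_{\min} + (j-1)h, \quad j = 1, \dots, q

\mu_j(x) = \max\!\Big(0,\, 1 - \frac{|x - e_j|}{h}\Big) \ \text{for } 1 < j < q, \qquad \mu_1(x) = 1 \ \text{for } x \le x_{\min}, \qquad \mu_q(x) = 1 \ \text{for } x \ge x_{\max}

so every possible x activates at least one (and at most two) fuzzy sets.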

How to use parameter sharing during offline training; the shared version is less accurate than the original method (one possible sharing scheme is sketched below).
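One plausible way to realize this sharing, an assumption on my part rather than necessarily the paper's exact procedure, is to pool the input-output pairs of all windows in a level and fit a single Wang–Mendel FS with the wmdeepzb function defined in the code below; every FS of the level then reuses the same (zb, rg):

import numpy as np

def train_shared_level(windows, y, mm):
    """Train one shared fuzzy system for a whole level.

    windows : list of (ntrain, 3) arrays, one per sliding window of the level
    y       : (ntrain, 1) array of target outputs
    Returns a single (zb, rg) pair reused by every FS in the level.
    (Assumes wmdeepzb from the code below is in scope.)
    """
    pooled_x = np.vstack(windows)             # stack all window inputs row-wise
    pooled_y = np.vstack([y] * len(windows))  # repeat the targets to match
    return wmdeepzb(mm, pooled_x, pooled_y)   # one Wang-Mendel fit for the level

# e.g. for level 1:
# zb1, rg1 = train_shared_level([x11, x12, x13, x14, x15, x16, x17, x18, x19], y, mm)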
How to train the DCFS online.
How to use parameter sharing during online training; again, the shared version performs worse.

Section IV: applying the DCFS model and its training algorithms to stock index prediction
The value to predict is r(t); the current time is t-1, and the past values are r(t-1), r(t-2), ..., r(t-n). These n past values are fed into the DCFS to predict r(t), which defines the input-output pairs.
The r values are derived from the daily closing prices (in the code below, r(t) is the log return of the price series).
The dataset has 3000 points: the first 2000 are used for training and the last 1000 for testing.
Each sample feeds 11 values into the DCFS, with every 3 consecutive values forming one input set I for an FS. The DCFS has 5 levels, containing 9 (= 11 − 3 + 1 sliding windows), 7, 5, 3, and 1 FSs respectively (a sketch of building these input-output pairs follows).
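A minimal sketch of assembling these input-output pairs, assuming r is a 1-D NumPy array of past returns (the load_data function in the code below does essentially the same thing):

import numpy as np

def make_io_pairs(r, n_inputs=11):
    """Each row: n_inputs consecutive past returns followed by the next return as target."""
    n_samples = len(r) - n_inputs
    xx = np.zeros((n_samples, n_inputs + 1))
    for i in range(n_samples):
        xx[i, :n_inputs] = r[i:i + n_inputs]  # r(t-11), ..., r(t-1)
        xx[i, n_inputs] = r[i + n_inputs]     # target r(t)
    return xx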
Different numbers of fuzzy sets were tried; with 20 fuzzy sets the Wang–Mendel algorithm is fast with a low error rate, while backpropagation is about 10 times slower with roughly 20% higher error.
On the Hang Seng Index, q = 10 works best, but that index is harder to predict overall.

Trading strategy: if the DCFS prediction r(t) > 0, the index is expected to rise tomorrow, so buy today at time t-1; if it is negative, sell.
The two "values" the author mentions are not needed here; they are used to predict the actual index value and the net index value. A sketch of the sign-based strategy follows.
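A minimal sketch of that sign rule, ignoring transaction costs and position sizing; pred and realized are hypothetical arrays of predicted and realized returns aligned in time:

import numpy as np

def sign_strategy_returns(pred, realized):
    """Go long one unit when the DCFS predicts a positive return, sell (short) otherwise."""
    position = np.where(pred > 0, 1.0, -1.0)  # buy on predicted rise, sell on predicted fall
    return position * realized                # per-period strategy returns

# cumulative (log-)return of the strategy over the test period:
# total = np.sum(sign_strategy_returns(pred, realized))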

To refine the strategy, a two-level network is built: one index and four heavyweight constituent stocks are fed together into the top-level FS for prediction.

The paper's training algorithms were implemented in MATLAB; the code below is a Python port.

 

import numpy as np
import pickle
import matplotlib.pyplot as plt


# meb2: membership grade of x in the i-th of n triangular fuzzy sets that are
# evenly spaced on [xmin, xmax]; the first and last sets saturate to 1 outside
# the range so that every x is covered
def meb2(n, i, x, xmin, xmax):
    h = (xmax - xmin) / (n - 1)  # spacing between neighboring centers
    res = 0

    if i == 1:
        if x < xmin:
            res = 1
        elif xmin <= x < xmin + h:
            res = (xmin - x + h) / h
        elif x >= xmin + h:
            res = 0
    elif 1 < i < n:
        if x < xmin + (i - 2) * h or x > xmin + i * h:
            res = 0
        elif xmin + (i - 2) * h <= x < xmin + (i - 1) * h:
            res = (x - xmin - (i - 2) * h) / h
        elif xmin + (i - 1) * h <= x < xmin + i * h:
            res = (-x + xmin + i * h) / h
    elif i == n:
        if x < xmax - h:
            res = 0
        elif xmax - h <= x < xmax:
            res = (-xmax + x + h) / h
        elif x >= xmax:
            res = 1

    return res


def wmdeepyy(mm, zb, ranges, xx):
    # Evaluate a trained fuzzy system: zb holds one consequent per cell (rule),
    # ranges holds the [min, max] of each input, xx is (numSamples, numInput).
    numSamples, numInput = xx.shape
    fnCounts = np.full(numInput, mm, dtype=int)

    # weights[i]: stride of input i in the flattened 0-based cell index
    # (input 0 is the most significant digit); must match wmdeepzb
    weights = np.ones(numInput, dtype=int)
    for i in range(numInput - 2, -1, -1):
        weights[i] = weights[i + 1] * fnCounts[i + 1]

    yy = np.zeros(numSamples)
    for k in range(numSamples):
        # Find the (at most two) active fuzzy sets of every input
        activFns = np.zeros((numInput, 2), dtype=int)
        activGrades = np.zeros((numInput, 2))
        for i in range(numInput):
            nthActive = 0
            for nthFn in range(1, fnCounts[i] + 1):
                grade = meb2(fnCounts[i], nthFn, xx[k, i], ranges[i, 0], ranges[i, 1])
                if grade > 0:
                    activFns[i, nthActive] = nthFn
                    activGrades[i, nthActive] = grade
                    nthActive += 1
                    if nthActive == 2:
                        break
            if nthActive == 1:  # only one set fires (boundary or exact node)
                activFns[i, 1] = activFns[i, 0]
                activGrades[i, 1] = 0.0

        # Weighted average over the 2^numInput combinations of active fuzzy sets
        a, b = 0.0, 0.0
        for comb in range(2 ** numInput):
            indexcell = 0
            grade = 1.0
            for j in range(numInput):
                choice = (comb >> j) & 1  # which of the two active sets of input j
                grade *= activGrades[j, choice]
                indexcell += (activFns[j, choice] - 1) * weights[j]
            a += zb[indexcell] * grade
            b += grade

        yy[k] = a / b if b > 0 else 0.0

    return yy


def wmdeepzb(mm, xx, y):
    # Wang-Mendel training: learn the rule consequents zb of one fuzzy system
    # from the input-output pairs (xx, y); also returns the input ranges.
    extra = np.concatenate((xx, y.reshape(-1, 1)), axis=1)
    num_samples, m = extra.shape
    num_input = m - 1
    fn_counts = np.full(num_input, mm, dtype=int)
    ranges = np.zeros((num_input, 2))
    num_cells = 1  # number of regions (cells)

    # Calculate ranges and numCells
    for i in range(num_input):
        ranges[i] = [np.min(extra[:, i]), np.max(extra[:, i])]
        num_cells *= fn_counts[i]

    # weights[i]: stride of input i in the flattened 0-based cell index
    # (same convention as in wmdeepyy)
    weights = np.ones(num_input, dtype=int)
    for i in range(num_input - 2, -1, -1):
        weights[i] = weights[i + 1] * fn_counts[i + 1]

    zb = np.zeros(num_cells)  # weighted sum of outputs per cell
    ym = np.zeros(num_cells)  # sum of rule weights per cell

    # Generate rules for cells covered by data
    for k in range(num_samples):
        index_cell = 0
        grade = 1.0
        for i in range(num_input):
            # fuzzy set with the highest membership grade for input i
            best_fn, best_grade = 1, 0.0
            for nth_fn in range(1, fn_counts[i] + 1):
                g = meb2(fn_counts[i], nth_fn, extra[k, i], ranges[i, 0], ranges[i, 1])
                if g > best_grade:
                    best_fn, best_grade = nth_fn, g
            grade *= best_grade
            index_cell += (best_fn - 1) * weights[i]
        ym[index_cell] += grade
        zb[index_cell] += extra[k, -1] * grade

    # Normalize zb
    for j in range(num_cells):
        if ym[j] != 0:
            zb[j] /= ym[j]

    # Extrapolate the rules to the cells not covered by data, by repeatedly
    # averaging over the already-covered neighboring cells
    while np.any(ym == 0):
        zbb = np.zeros(num_cells)
        ymm = np.zeros(num_cells)
        filled = 0
        for s in range(num_cells):
            if ym[s] != 0:
                continue
            zbnum = 0
            for i in range(num_input):
                digit = (s // weights[i]) % fn_counts[i] + 1  # 1-based set index of input i
                if digit > 1 and ym[s - weights[i]] != 0:
                    zbb[s] += zb[s - weights[i]]
                    ymm[s] += ym[s - weights[i]]
                    zbnum += 1
                if digit < fn_counts[i] and ym[s + weights[i]] != 0:
                    zbb[s] += zb[s + weights[i]]
                    ymm[s] += ym[s + weights[i]]
                    zbnum += 1
            if zbnum >= 1:
                zbb[s] /= zbnum
                ymm[s] /= zbnum
                filled += 1

        # Commit the newly filled cells
        for s in range(num_cells):
            if ym[s] == 0 and ymm[s] != 0:
                zb[s] = zbb[s]
                ym[s] = ymm[s]

        if filled == 0:  # no covered neighbors left to extrapolate from
            break

    return zb, ranges


def load_data():
    # Fix the random seed for reproducible results
    np.random.seed(0)

    # Initial conditions of a synthetic, Mackey-Glass-style time series
    # that stands in for the price series
    p0 = np.zeros(3150)
    for k in range(1, 51):
        p0[k-1] = 0.04 * (k - 1)

    for k in range(51, 3150):
        p0[k-1] = 0.9 * p0[k-2] + 0.2 * p0[k-51] / (1 + p0[k-51]**10)

    # Noisy log returns of the series
    r0 = np.zeros(3100)
    for k in range(50, 3150):
        r0[k-50] = np.log(p0[k-1] / p0[k-2]) + 0.0001 * np.random.randn()

    # Rearrange the returns into the data matrix xx (11 inputs + 1 output per row)
    xx = np.zeros((3000, 12))
    for i in range(3000):
        for j in range(12):
            xx[i, j] = r0[i + j]

    # Save the dataset xx to disk if needed
    # np.save('xx.npy', xx)
    return xx

'''
This block loads the dataset xx, an N x 12 matrix whose first 11 columns are the
inputs and whose last column is the output. It splits the data into training and
test sets, sets the number of fuzzy sets mm, and prepares the training data for
the 9 level-1 fuzzy systems, which are then trained to obtain each system's zb
and rg parameters.
'''
# Assume xx has already been loaded into a NumPy array
# xx = np.load('xx.npy')
xx = load_data()
N = xx.shape[0]  # size of the dataset
ntrain = int(N * 2 / 3)  # split into training and test data
mm = 20  # number of fuzzy sets per input


def train(xx):
    # Level-1 inputs: nine sliding windows of width 3 over the 11 input columns
    x11 = np.zeros((ntrain, 3))
    x12 = np.zeros((ntrain, 3))
    x13 = np.zeros((ntrain, 3))
    x14 = np.zeros((ntrain, 3))
    x15 = np.zeros((ntrain, 3))
    x16 = np.zeros((ntrain, 3))
    x17 = np.zeros((ntrain, 3))
    x18 = np.zeros((ntrain, 3))
    x19 = np.zeros((ntrain, 3))
    y = np.zeros((ntrain, 1))

    # Fill the training data
    for i in range(ntrain):
        for j in range(3):  # window size = 3
            x11[i, j] = xx[i, j]
            x12[i, j] = xx[i, j+1]
            x13[i, j] = xx[i, j+2]
            x14[i, j] = xx[i, j+3]
            x15[i, j] = xx[i, j+4]
            x16[i, j] = xx[i, j+5]
            x17[i, j] = xx[i, j+6]
            x18[i, j] = xx[i, j+7]
            x19[i, j] = xx[i, j+8]
        y[i] = xx[i, 11]  # target output

    # Train the 9 level-1 fuzzy systems
    zb11, rg11 = wmdeepzb(mm, x11, y)
    zb12, rg12 = wmdeepzb(mm, x12, y)
    zb13, rg13 = wmdeepzb(mm, x13, y)
    zb14, rg14 = wmdeepzb(mm, x14, y)
    zb15, rg15 = wmdeepzb(mm, x15, y)
    zb16, rg16 = wmdeepzb(mm, x16, y)
    zb17, rg17 = wmdeepzb(mm, x17, y)
    zb18, rg18 = wmdeepzb(mm, x18, y)
    zb19, rg19 = wmdeepzb(mm, x19, y)

    print('Level 1 training done')

    # Level-2 inputs x21..x27 are windows of width 3 over the outputs of the
    # 9 level-1 fuzzy systems; then the 7 level-2 fuzzy systems are trained.
    # Initialize the level-2 inputs
    x21 = np.zeros((ntrain, 3))
    x22 = np.zeros((ntrain, 3))
    x23 = np.zeros((ntrain, 3))
    x24 = np.zeros((ntrain, 3))
    x25 = np.zeros((ntrain, 3))
    x26 = np.zeros((ntrain, 3))
    x27 = np.zeros((ntrain, 3))

    # Compute the level-2 inputs
    x21[:, 0] = wmdeepyy(mm, zb11, rg11, x11)
    x21[:, 1] = wmdeepyy(mm, zb12, rg12, x12)
    x21[:, 2] = wmdeepyy(mm, zb13, rg13, x13)

    x22[:, 0] = x21[:, 1]
    x22[:, 1] = x21[:, 2]
    x22[:, 2] = wmdeepyy(mm, zb14, rg14, x14)

    x23[:, 0] = x21[:, 2]
    x23[:, 1] = x22[:, 2]
    x23[:, 2] = wmdeepyy(mm, zb15, rg15, x15)

    x24[:, 0] = x22[:, 2]
    x24[:, 1] = x23[:, 2]
    x24[:, 2] = wmdeepyy(mm, zb16, rg16, x16)

    x25[:, 0] = x23[:, 2]
    x25[:, 1] = x24[:, 2]
    x25[:, 2] = wmdeepyy(mm, zb17, rg17, x17)

    x26[:, 0] = x24[:, 2]
    x26[:, 1] = x25[:, 2]
    x26[:, 2] = wmdeepyy(mm, zb18, rg18, x18)

    x27[:, 0] = x25[:, 2]
    x27[:, 1] = x26[:, 2]
    x27[:, 2] = wmdeepyy(mm, zb19, rg19, x19)

    # Train the 7 level-2 fuzzy systems
    zb21, rg21 = wmdeepzb(mm, x21, y)
    zb22, rg22 = wmdeepzb(mm, x22, y)
    zb23, rg23 = wmdeepzb(mm, x23, y)
    zb24, rg24 = wmdeepzb(mm, x24, y)
    zb25, rg25 = wmdeepzb(mm, x25, y)
    zb26, rg26 = wmdeepzb(mm, x26, y)
    zb27, rg27 = wmdeepzb(mm, x27, y)

    print('Level 2 training done')

    # Level-3 inputs x31..x35 are built from the level-2 outputs;
    # then the 5 level-3 fuzzy systems are trained.
    # Initialize the level-3 inputs
    x31 = np.zeros((ntrain, 3))
    x32 = np.zeros((ntrain, 3))
    x33 = np.zeros((ntrain, 3))
    x34 = np.zeros((ntrain, 3))
    x35 = np.zeros((ntrain, 3))

    # Compute the level-3 inputs
    x31[:, 0] = wmdeepyy(mm, zb21, rg21, x21)
    x31[:, 1] = wmdeepyy(mm, zb22, rg22, x22)
    x31[:, 2] = wmdeepyy(mm, zb23, rg23, x23)

    x32[:, 0] = x31[:, 1]
    x32[:, 1] = x31[:, 2]
    x32[:, 2] = wmdeepyy(mm, zb24, rg24, x24)

    x33[:, 0] = x31[:, 2]
    x33[:, 1] = x32[:, 2]
    x33[:, 2] = wmdeepyy(mm, zb25, rg25, x25)

    x34[:, 0] = x32[:, 2]
    x34[:, 1] = x33[:, 2]
    x34[:, 2] = wmdeepyy(mm, zb26, rg26, x26)

    x35[:, 0] = x33[:, 2]
    x35[:, 1] = x34[:, 2]
    x35[:, 2] = wmdeepyy(mm, zb27, rg27, x27)

    # Train the 5 level-3 fuzzy systems
    zb31, rg31 = wmdeepzb(mm, x31, y)
    zb32, rg32 = wmdeepzb(mm, x32, y)
    zb33, rg33 = wmdeepzb(mm, x33, y)
    zb34, rg34 = wmdeepzb(mm, x34, y)
    zb35, rg35 = wmdeepzb(mm, x35, y)

    print('Level 3 training done')

    # Level-4 inputs x41, x42, x43 are built from the level-3 outputs;
    # then the 3 level-4 fuzzy systems are trained.
    # Initialize the level-4 inputs
    x41 = np.zeros((ntrain, 3))
    x42 = np.zeros((ntrain, 3))
    x43 = np.zeros((ntrain, 3))

    # Compute the level-4 inputs
    x41[:, 0] = wmdeepyy(mm, zb31, rg31, x31)
    x41[:, 1] = wmdeepyy(mm, zb32, rg32, x32)
    x41[:, 2] = wmdeepyy(mm, zb33, rg33, x33)

    x42[:, 0] = x41[:, 1]
    x42[:, 1] = x41[:, 2]
    x42[:, 2] = wmdeepyy(mm, zb34, rg34, x34)

    x43[:, 0] = x41[:, 2]
    x43[:, 1] = x42[:, 2]
    x43[:, 2] = wmdeepyy(mm, zb35, rg35, x35)

    # Train the 3 level-4 fuzzy systems
    zb41, rg41 = wmdeepzb(mm, x41, y)
    zb42, rg42 = wmdeepzb(mm, x42, y)
    zb43, rg43 = wmdeepzb(mm, x43, y)

    print('Level 4 training done')

    # The level-5 input x51 is built from the level-4 outputs;
    # then the single top-level fuzzy system is trained.
    # Initialize the level-5 input
    x51 = np.zeros((ntrain, 3))

    # Compute the level-5 input
    x51[:, 0] = wmdeepyy(mm, zb41, rg41, x41)
    x51[:, 1] = wmdeepyy(mm, zb42, rg42, x42)
    x51[:, 2] = wmdeepyy(mm, zb43, rg43, x43)

    # Train the level-5 fuzzy system
    zb51, rg51 = wmdeepzb(mm, x51, y)

    print('Level 5 training done')
    print('Training done')

    # Collect all the coefficients into one dictionary
    coefficients = {
        'zb11': zb11, 'rg11': rg11,
        'zb12': zb12, 'rg12': rg12,
        'zb13': zb13, 'rg13': rg13,
        'zb14': zb14, 'rg14': rg14,
        'zb15': zb15, 'rg15': rg15,
        'zb16': zb16, 'rg16': rg16,
        'zb17': zb17, 'rg17': rg17,
        'zb18': zb18, 'rg18': rg18,
        'zb19': zb19, 'rg19': rg19,
        'zb21': zb21, 'rg21': rg21,
        'zb22': zb22, 'rg22': rg22,
        'zb23': zb23, 'rg23': rg23,
        'zb24': zb24, 'rg24': rg24,
        'zb25': zb25, 'rg25': rg25,
        'zb26': zb26, 'rg26': rg26,
        'zb27': zb27, 'rg27': rg27,
        'zb31': zb31, 'rg31': rg31,
        'zb32': zb32, 'rg32': rg32,
        'zb33': zb33, 'rg33': rg33,
        'zb34': zb34, 'rg34': rg34,
        'zb35': zb35, 'rg35': rg35,
        'zb41': zb41, 'rg41': rg41,
        'zb42': zb42, 'rg42': rg42,
        'zb43': zb43, 'rg43': rg43,
        'zb51': zb51, 'rg51': rg51
    }

    # Save the dictionary to a file
    with open('fuzzy_system_coefficients.pkl', 'wb') as file:
        pickle.dump(coefficients, file)

    print('Coefficients have been saved to fuzzy_system_coefficients.pkl')

    return y


def predict():
    # Load the trained coefficients
    with open('fuzzy_system_coefficients.pkl', 'rb') as file:
        coefficients = pickle.load(file)
    # Each coefficient can now be accessed, for example:
    zb11 = coefficients['zb11']
    rg11 = coefficients['rg11']
    zb12 = coefficients['zb12']
    rg12 = coefficients['rg12']
    zb13 = coefficients['zb13']
    rg13 = coefficients['rg13']
    zb14 = coefficients['zb14']
    rg14 = coefficients['rg14']
    zb15 = coefficients['zb15']
    rg15 = coefficients['rg15']
    zb16 = coefficients['zb16']
    rg16 = coefficients['rg16']
    zb17 = coefficients['zb17']
    rg17 = coefficients['rg17']
    zb18 = coefficients['zb18']
    rg18 = coefficients['rg18']
    zb19 = coefficients['zb19']
    rg19 = coefficients['rg19']
    zb21 = coefficients['zb21']
    rg21 = coefficients['rg21']
    zb22 = coefficients['zb22']
    rg22 = coefficients['rg22']
    zb23 = coefficients['zb23']
    rg23 = coefficients['rg23']
    zb24 = coefficients['zb24']
    rg24 = coefficients['rg24']
    zb25 = coefficients['zb25']
    rg25 = coefficients['rg25']
    zb26 = coefficients['zb26']
    rg26 = coefficients['rg26']
    zb27 = coefficients['zb27']
    rg27 = coefficients['rg27']
    zb31 = coefficients['zb31']
    rg31 = coefficients['rg31']
    zb32 = coefficients['zb32']
    rg32 = coefficients['rg32']
    zb33 = coefficients['zb33']
    rg33 = coefficients['rg33']
    zb34 = coefficients['zb34']
    rg34 = coefficients['rg34']
    zb35 = coefficients['zb35']
    rg35 = coefficients['rg35']
    zb41 = coefficients['zb41']
    rg41 = coefficients['rg41']
    zb42 = coefficients['zb42']
    rg42 = coefficients['rg42']
    zb43 = coefficients['zb43']
    rg43 = coefficients['rg43']
    zb51 = coefficients['zb51']
    rg51 = coefficients['rg51']

    mm = 20  # number of fuzzy sets per input

    # Level-1 inputs for all N samples (training and test)
    N = xx.shape[0]
    x11 = np.zeros((N, 3))
    x12 = np.zeros((N, 3))
    x13 = np.zeros((N, 3))
    x14 = np.zeros((N, 3))
    x15 = np.zeros((N, 3))
    x16 = np.zeros((N, 3))
    x17 = np.zeros((N, 3))
    x18 = np.zeros((N, 3))
    x19 = np.zeros((N, 3))

    # Fill the level-1 inputs (the true targets are read from xx in the main script)
    for i in range(N):
        for j in range(3):
            x11[i, j] = xx[i, j]
            x12[i, j] = xx[i, j+1]
            x13[i, j] = xx[i, j+2]
            x14[i, j] = xx[i, j+3]
            x15[i, j] = xx[i, j+4]
            x16[i, j] = xx[i, j+5]
            x17[i, j] = xx[i, j+6]
            x18[i, j] = xx[i, j+7]
            x19[i, j] = xx[i, j+8]

    print('Level 1 computing done')

    # Level-1 outputs feed the level-2 inputs
    x21 = np.zeros((N, 3))
    x22 = np.zeros((N, 3))
    x23 = np.zeros((N, 3))
    x24 = np.zeros((N, 3))
    x25 = np.zeros((N, 3))
    x26 = np.zeros((N, 3))
    x27 = np.zeros((N, 3))

    x21[:, 0] = wmdeepyy(mm, zb11, rg11, x11)
    x21[:, 1] = wmdeepyy(mm, zb12, rg12, x12)
    x21[:, 2] = wmdeepyy(mm, zb13, rg13, x13)
    x22[:, 0] = x21[:, 1]
    x22[:, 1] = x21[:, 2]
    x22[:, 2] = wmdeepyy(mm, zb14, rg14, x14)
    x23[:, 0] = x21[:, 2]
    x23[:, 1] = x22[:, 2]
    x23[:, 2] = wmdeepyy(mm, zb15, rg15, x15)
    x24[:, 0] = x22[:, 2]
    x24[:, 1] = x23[:, 2]
    x24[:, 2] = wmdeepyy(mm, zb16, rg16, x16)
    x25[:, 0] = x23[:, 2]
    x25[:, 1] = x24[:, 2]
    x25[:, 2] = wmdeepyy(mm, zb17, rg17, x17)
    x26[:, 0] = x24[:, 2]
    x26[:, 1] = x25[:, 2]
    x26[:, 2] = wmdeepyy(mm, zb18, rg18, x18)
    x27[:, 0] = x25[:, 2]
    x27[:, 1] = x26[:, 2]
    x27[:, 2] = wmdeepyy(mm, zb19, rg19, x19)

    print('Level 2 computing done')

    # Level-2 outputs feed the level-3 inputs
    x31 = np.zeros((N, 3))
    x32 = np.zeros((N, 3))
    x33 = np.zeros((N, 3))
    x34 = np.zeros((N, 3))
    x35 = np.zeros((N, 3))

    x31[:, 0] = wmdeepyy(mm, zb21, rg21, x21)
    x31[:, 1] = wmdeepyy(mm, zb22, rg22, x22)
    x31[:, 2] = wmdeepyy(mm, zb23, rg23, x23)
    x32[:, 0] = x31[:, 1]
    x32[:, 1] = x31[:, 2]
    x32[:, 2] = wmdeepyy(mm, zb24, rg24, x24)
    x33[:, 0] = x31[:, 2]
    x33[:, 1] = x32[:, 2]
    x33[:, 2] = wmdeepyy(mm, zb25, rg25, x25)
    x34[:, 0] = x32[:, 2]
    x34[:, 1] = x33[:, 2]
    x34[:, 2] = wmdeepyy(mm, zb26, rg26, x26)
    x35[:, 0] = x33[:, 2]
    x35[:, 1] = x34[:, 2]
    x35[:, 2] = wmdeepyy(mm, zb27, rg27, x27)

    print('Level 3 computing done')

    # Level-3 outputs feed the level-4 inputs
    x41 = np.zeros((N, 3))
    x42 = np.zeros((N, 3))
    x43 = np.zeros((N, 3))

    x41[:, 0] = wmdeepyy(mm, zb31, rg31, x31)
    x41[:, 1] = wmdeepyy(mm, zb32, rg32, x32)
    x41[:, 2] = wmdeepyy(mm, zb33, rg33, x33)
    x42[:, 0] = x41[:, 1]
    x42[:, 1] = x41[:, 2]
    x42[:, 2] = wmdeepyy(mm, zb34, rg34, x34)
    x43[:, 0] = x41[:, 2]
    x43[:, 1] = x42[:, 2]
    x43[:, 2] = wmdeepyy(mm, zb35, rg35, x35)

    print('Level 4 computing done')

    # Level-4 outputs feed the level-5 input
    x51 = np.zeros((N, 3))

    x51[:, 0] = wmdeepyy(mm, zb41, rg41, x41)
    x51[:, 1] = wmdeepyy(mm, zb42, rg42, x42)
    x51[:, 2] = wmdeepyy(mm, zb43, rg43, x43)

    print('Level 5 computing done')

    # Final output of the DCFS
    yy = wmdeepyy(mm, zb51, rg51, x51)

    return yy


y = train(xx)   # train the DCFS and save the coefficients; returns the training targets
yy = predict()  # DCFS predictions for all N samples

# Pointwise error over all samples (the true target is column 11 of xx)
e = xx[:, 11] - yy

# Training error (RMSE over the first ntrain points)
err_train = np.sqrt(np.sum(e[:ntrain] ** 2) / ntrain)

# Test error (RMSE over the remaining points)
err_test = np.sqrt(np.sum(e[ntrain:] ** 2) / (N - ntrain))

# Plot the error curve
plt.figure(figsize=(10, 6))
plt.plot(e, label='DCFS Error')
plt.title(f'DCFS: Training error = {err_train:.4f}, Testing error = {err_test:.4f}')
plt.xlabel('Data Point')
plt.ylabel('Error')
plt.legend()
plt.grid(True)
plt.show()

 

From: https://www.cnblogs.com/zhaot1993/p/18161886

     LlamaIndex是一个基于LLM(大语言模型)的应用程序数据框架,适用于受益于上下文增强的场景。这类LLM系统被称为RAG(检索增强生成)系统。LlamaIndex提供了必要的抽象层,以便更容易地摄取、结构化和访问私有或特定领域的数据,从而安全可靠地将这些数据注入LLM中,以实现更准确的文......