
Basics of Neural Network Programming

Posted: 2023-05-27 17:34:19


Logistic Regression

Given \(x\), we want \(\hat{y}=P(y=1\mid x)\), where \(x\in\mathbb{R}^{n_x}\).

For a single example \(x\) with features \(x_1,\dots,x_{n_x}\), the linear part is \(z=w_1x_1+w_2x_2+\dots+w_{n_x}x_{n_x}+b\).

Parameters: \(w\in\mathbb{R}^{n_x},\ b\in\mathbb{R}\)

Output: \(\hat{y}=\sigma(w^Tx+b),\ \ \ \hat{y}\in(0,1)\).

\(\sigma(z)=\cfrac{1}{1+e^{-z}}\).
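The definitions above can be sketched directly in NumPy (the parameter values here are arbitrary, just to illustrate the shapes):

```python
import numpy as np

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z)): maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x, b):
    """y_hat = sigmoid(w^T x + b) for a single example x in R^{n_x}."""
    return sigmoid(np.dot(w, x) + b)

# Hypothetical parameters and input, n_x = 2:
w = np.array([0.5, -0.3])
b = 0.1
x = np.array([1.0, 2.0])
y_hat = predict(w, x, b)   # a probability in (0, 1)
```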

Loss (error) function:

The squared error \(\ell(\hat{y},y)=\cfrac{1}{2}(\hat{y}-y)^2\) is not used in logistic regression because it makes the optimization problem non-convex; instead we use the cross-entropy loss \(\ell(\hat{y},y)=-(y\log\hat{y}+(1-y)\log(1-\hat{y}))\).

Here \(\log\) denotes the natural logarithm \(\ln\).

For this loss: if \(y=1\), then \(\ell(\hat{y},y)=-\log\hat{y}\), so to drive the loss toward 0, \(\hat{y}\) must be as large as possible; since \(\hat{y}\in(0,1)\), as \(\ell(\hat{y},y)\to 0\), \(\hat{y}\) approaches 1. Conversely, if \(y=0\), then \(\ell(\hat{y},y)=-\log(1-\hat{y})\), and driving the loss toward 0 pushes \(\hat{y}\) toward 0.
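This behavior is easy to check numerically. A minimal sketch of the cross-entropy loss, evaluated for the \(y=1\) case at predictions near 1 and near 0:

```python
import numpy as np

def cross_entropy_loss(y_hat, y):
    """l(y_hat, y) = -(y*log(y_hat) + (1-y)*log(1-y_hat)), natural log."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# When y = 1, the loss shrinks as y_hat approaches 1:
loss_good = cross_entropy_loss(0.99, 1)  # y_hat close to the label
loss_bad = cross_entropy_loss(0.01, 1)   # y_hat far from the label
```

`loss_good` is small and `loss_bad` is large, matching the argument above.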

Cost function:

\(\begin{aligned}J(w,b)&=\cfrac{1}{m}\displaystyle\sum_{i=1}^{m}\ell(\hat{y}^{(i)},y^{(i)})\\&=-\cfrac{1}{m}\displaystyle\sum_{i=1}^{m}\bigg[y^{(i)}\log\hat{y}^{(i)}+(1-y^{(i)})\log(1-\hat{y}^{(i)})\bigg]\end{aligned}\).
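The cost function can be vectorized over all \(m\) training examples. A sketch, assuming \(X\) stacks the examples as columns (shape \((n_x, m)\)) and \(Y\) holds the \(m\) labels:

```python
import numpy as np

def cost(w, b, X, Y):
    """J(w, b): average cross-entropy loss over m examples.
    X has shape (n_x, m), one column per example; Y has shape (m,)."""
    Y_hat = 1.0 / (1.0 + np.exp(-(w @ X + b)))   # sigmoid, shape (m,)
    return -np.mean(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))
```

With \(w=0,\ b=0\), every \(\hat{y}^{(i)}=0.5\), so \(J=\ln 2\approx 0.693\) regardless of the labels.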

Gradient descent algorithm:

\(\begin{aligned}&\text{Repeat}\ \{\\&\quad w := w-\alpha\cfrac{dJ(w)}{dw}\\&\}\end{aligned}\).

(The parameter \(b\) is ignored here for simplicity, \(J(w,b)\to J(w)\); in practice \(b\) is updated the same way.)

\(\alpha\) : learning rate
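Putting the pieces together, the update rule can be sketched as a full training loop for logistic regression. The gradients \(dw=\frac{1}{m}X(\hat{Y}-Y)^T\) and \(db=\frac{1}{m}\sum(\hat{y}^{(i)}-y^{(i)})\) follow from differentiating the cost above; the dataset shape convention (examples as columns of \(X\)) is an assumption:

```python
import numpy as np

def gradient_descent(X, Y, alpha=0.1, iters=1000):
    """Repeat { w := w - alpha * dJ/dw; b := b - alpha * dJ/db }.
    X: (n_x, m), Y: (m,). Returns the learned w and b."""
    n_x, m = X.shape
    w = np.zeros(n_x)
    b = 0.0
    for _ in range(iters):
        Y_hat = 1.0 / (1.0 + np.exp(-(w @ X + b)))  # forward pass, shape (m,)
        dw = X @ (Y_hat - Y) / m                     # dJ/dw
        db = np.mean(Y_hat - Y)                      # dJ/db
        w -= alpha * dw
        b -= alpha * db
    return w, b
```

On a tiny 1-D dataset where small inputs are labeled 0 and large inputs 1, the learned parameters separate the two groups.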

From: https://www.cnblogs.com/99kk/p/17437064.html

    Abstract+IntroductionGNNs大都遵循一个递归邻居聚合的方法,经过k次迭代聚合,一个节点所表征的特征向量能够捕捉到距离其k-hop邻域的邻居节点的特征,然后还可以通过pooling获取到整个图的表征(比如将所有节点的表征向量相加后用于表示一个图表征向量)。关于邻居聚合策略以及......