
Experiment 3: OpenFlow Protocol Analysis Practice


Basic Requirements

I. Topology File

from mininet.net import Mininet
from mininet.node import Controller, RemoteController, OVSController
from mininet.node import CPULimitedHost, Host, Node
from mininet.node import OVSKernelSwitch, UserSwitch
from mininet.node import IVSSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel, info
from mininet.link import TCLink, Intf
from subprocess import call

def myNetwork():

    net = Mininet( topo=None,
                   build=False,
                   ipBase='192.168.0.0/24')

    info( '*** Adding controller\n' )
    c0=net.addController(name='c0',
                      controller=Controller,
                      protocol='tcp',
                      port=6633)

    info( '*** Add switches\n')
    s1 = net.addSwitch('s1', cls=OVSKernelSwitch)
    s2 = net.addSwitch('s2', cls=OVSKernelSwitch)

    info( '*** Add hosts\n')
    h1 = net.addHost('h1', cls=Host, ip='192.168.0.101/24', defaultRoute=None)
    h2 = net.addHost('h2', cls=Host, ip='192.168.0.102/24', defaultRoute=None)
    h3 = net.addHost('h3', cls=Host, ip='192.168.0.103/24', defaultRoute=None)
    h4 = net.addHost('h4', cls=Host, ip='192.168.0.104/24', defaultRoute=None)

    info( '*** Add links\n')
    net.addLink(h1, s1)
    net.addLink(h3, s1)
    net.addLink(h2, s2)
    net.addLink(h4, s2)
    net.addLink(s1, s2)

    info( '*** Starting network\n')
    net.build()
    info( '*** Starting controllers\n')
    for controller in net.controllers:
        controller.start()

    info( '*** Starting switches\n')
    net.get('s1').start([c0])
    net.get('s2').start([c0])

    info( '*** Post configure switches and hosts\n')
    s1.cmd('ifconfig s1 192.168.0.0/24')
    s2.cmd('ifconfig s2 192.168.0.0/24')

    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    myNetwork()

Screenshot of the pingall result

Viewing the capture results in Wireshark

1. Hello

Controller port 6633 ("the highest version I support is OpenFlow 1.0") ---> switch port 35614

Switch port 35614 ("the highest version I support is OpenFlow 1.5") ---> controller port 6633

The two sides settle on the lower of the two versions, so this session runs OpenFlow 1.0.

2. Features Request

Controller port 6633 ("I need your feature information") ---> switch port 35614

3. Set Config

Controller port 6633 ("please configure yourself with the flags and max bytes of packet I am sending") ---> switch port 35614

4. Port_Status

When a switch port changes state, the switch notifies the controller of the corresponding port status.

5. Features Reply

Switch port 35614 ("here is my feature information, please check") ---> controller port 6633

6. Packet_In

Switch port 35614 ("a packet has come in, please advise") ---> controller port 6633

7. Flow_Mod

Controller port 6633 ("please handle the packet according to the actions I am giving you") ---> switch port 35614

8. Packet_Out

Controller port 6633 ("please send this packet out according to the specified action") ---> switch port 35614

Advanced Requirements

Interaction diagram

  • TCP protocol: the OpenFlow messages above are carried over a TCP connection between the controller (port 6633) and the switch, so the interaction diagram also shows the TCP handshake and acknowledgements.

Compare the capture results from step 2 of the basic requirements against the OpenFlow source code to understand the data structure definitions corresponding to the main OpenFlow message types.

1. Hello

struct ofp_header {
    uint8_t version;    /* OFP_VERSION. */
    uint8_t type;       /* One of the OFPT_ constants. */
    uint16_t length;    /* Length including this ofp_header. */
    uint32_t xid;       /* Transaction id associated with this packet.
                           Replies use the same id as was in the request
                           to facilitate pairing. */
};

struct ofp_hello {
    struct ofp_header header;
};
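
As an illustration (not part of the original capture workflow), here is a minimal Python sketch of how the 8-byte common header maps onto these fields; the byte string below is a made-up example of an OpenFlow 1.0 Hello, not a packet from the capture:

import struct

# ofp_header is 8 bytes: version (uint8), type (uint8), length (uint16), xid (uint32),
# all in network byte order, matching the struct definition above.
OFP_HEADER_FMT = '!BBHI'

def parse_ofp_header(data):
    version, msg_type, length, xid = struct.unpack(OFP_HEADER_FMT, data[:8])
    return version, msg_type, length, xid

# Example: a Hello from an OpenFlow 1.0 speaker (version=0x01, type=OFPT_HELLO=0,
# length=8 because the message is only the header, arbitrary xid).
hello = struct.pack(OFP_HEADER_FMT, 0x01, 0, 8, 0x01)
print(parse_ofp_header(hello))   # (1, 0, 8, 1)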


2. Features Request

A Features Request has no message body of its own; on the wire it is just an ofp_header with the type field set to OFPT_FEATURES_REQUEST:

struct ofp_header {
    uint8_t version;    /* OFP_VERSION. */
    uint8_t type;       /* One of the OFPT_ constants. */
    uint16_t length;    /* Length including this ofp_header. */
    uint32_t xid;       /* Transaction id associated with this packet.
                           Replies use the same id as was in the request
                           to facilitate pairing. */
};
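
A hedged sketch of what the controller puts on the wire for this message, assuming the OpenFlow 1.0 constants OFP_VERSION = 0x01 and OFPT_FEATURES_REQUEST = 5 from openflow.h (not shown in the excerpt above):

import struct

OFP_VERSION = 0x01            # OpenFlow 1.0 wire version
OFPT_FEATURES_REQUEST = 5     # message type constant from openflow.h

def build_features_request(xid):
    # The message is just the 8-byte header, so length equals 8.
    return struct.pack('!BBHI', OFP_VERSION, OFPT_FEATURES_REQUEST, 8, xid)

print(build_features_request(42).hex())   # 010500080000002a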

3. Set Config

/* Switch configuration. */
struct ofp_switch_config {
    struct ofp_header header;
    uint16_t flags;             /* OFPC_* flags. */
    uint16_t miss_send_len;     /* Max bytes of new flow that datapath should
                                   send to the controller. */
};
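
For reference, a small sketch (assumptions: OFPT_SET_CONFIG = 9 and the default miss_send_len of 128 bytes, both from the OpenFlow 1.0 header file) showing how the 12-byte Set Config message seen in the capture can be decoded:

import struct

OFPT_SET_CONFIG = 9    # message type constant from openflow.h

def parse_set_config(msg):
    # ofp_switch_config = 8-byte header + flags (uint16) + miss_send_len (uint16).
    version, msg_type, length, xid, flags, miss_send_len = struct.unpack('!BBHIHH', msg[:12])
    assert msg_type == OFPT_SET_CONFIG
    return flags, miss_send_len

# Made-up example: flags = 0 (OFPC_FRAG_NORMAL), miss_send_len = 128 (the default).
msg = struct.pack('!BBHIHH', 0x01, OFPT_SET_CONFIG, 12, 7, 0, 128)
print(parse_set_config(msg))   # (0, 128)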

4. Port_Status

/* A physical port has changed in the datapath */
struct ofp_port_status {
    struct ofp_header header;
    uint8_t reason;          /* One of OFPPR_*. */
    uint8_t pad[7];          /* Align to 64-bits. */
    struct ofp_phy_port desc;
};
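
A minimal sketch of reading the reason field, assuming the OpenFlow 1.0 values OFPPR_ADD = 0, OFPPR_DELETE = 1 and OFPPR_MODIFY = 2 (not shown in the excerpt); the 48-byte ofp_phy_port body is left unparsed here:

import struct

OFPPR_NAMES = {0: 'ADD', 1: 'DELETE', 2: 'MODIFY'}   # enum ofp_port_reason

def parse_port_status(msg):
    # 8-byte header, then reason (uint8) and 7 pad bytes; ofp_phy_port follows.
    reason = struct.unpack_from('!B', msg, 8)[0]
    return OFPPR_NAMES.get(reason, 'UNKNOWN')

# Made-up example: a 64-byte Port_Status with reason = MODIFY and a zeroed port body.
msg = struct.pack('!BBHI', 0x01, 12, 64, 3) + bytes([2]) + bytes(7) + bytes(48)
print(parse_port_status(msg))   # MODIFY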

5. Features Reply

struct ofp_switch_features {
    struct ofp_header header;
    uint64_t datapath_id;   /* Datapath unique ID.  The lower 48-bits are for
                               a MAC address, while the upper 16-bits are
                               implementer-defined. */
 
    uint32_t n_buffers;     /* Max packets buffered at once. */
 
    uint8_t n_tables;       /* Number of tables supported by datapath. */
    uint8_t pad[3];         /* Align to 64-bits. */
 
    /* Features. */
    uint32_t capabilities;  /* Bitmap of support "ofp_capabilities". */
    uint32_t actions;       /* Bitmap of supported "ofp_action_type"s. */
 
    /* Port info.*/
    struct ofp_phy_port ports[0];  /* Port definitions.  The number of ports
                                      is inferred from the length field in
                                      the header. */
};
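
A hedged sketch of decoding the fixed 32-byte part of this message and inferring the number of ports from the header length, as the comment above describes (assumption: each ofp_phy_port entry is 48 bytes in OpenFlow 1.0):

import struct

def parse_features_reply(msg):
    # Fixed part: 8-byte header, datapath_id (uint64), n_buffers (uint32),
    # n_tables (uint8), 3 pad bytes, capabilities (uint32), actions (uint32) = 32 bytes.
    (version, msg_type, length, xid,
     datapath_id, n_buffers, n_tables,
     capabilities, actions) = struct.unpack('!BBHIQIB3xII', msg[:32])
    n_ports = (length - 32) // 48   # remaining bytes are 48-byte ofp_phy_port entries
    return datapath_id, n_buffers, n_tables, n_ports

# Made-up example: datapath_id 1, 256 buffers, 1 table, no ports listed.
msg = struct.pack('!BBHIQIB3xII', 0x01, 6, 32, 9, 1, 256, 1, 0, 0)
print(parse_features_reply(msg))   # (1, 256, 1, 0)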

6. Packet_In

/* Why is this packet being sent to the controller? */
enum ofp_packet_in_reason {
    OFPR_NO_MATCH,          /* No matching flow. */
    OFPR_ACTION             /* Action explicitly output to controller. */
};
 
/* Packet received on port (datapath -> controller). */
struct ofp_packet_in {
    struct ofp_header header;
    uint32_t buffer_id;     /* ID assigned by datapath. */
    uint16_t total_len;     /* Full length of frame. */
    uint16_t in_port;       /* Port on which frame was received. */
    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
    uint8_t pad;
    uint8_t data[0];        /* Ethernet frame, halfway through 32-bit word,
                               so the IP header is 32-bit aligned.  The
                               amount of data is inferred from the length
                               field in the header.  Because of padding,
                               offsetof(struct ofp_packet_in, data) ==
                               sizeof(struct ofp_packet_in) - 2. */
};
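
A minimal parsing sketch; per the struct above, the fixed part on the wire is 18 bytes (8-byte header, buffer_id, total_len, in_port, reason, one pad byte) and the raw Ethernet frame follows. The driving example is made up, not taken from the capture:

import struct

OFPR_NAMES = {0: 'NO_MATCH', 1: 'ACTION'}   # enum ofp_packet_in_reason

def parse_packet_in(msg):
    (version, msg_type, length, xid,
     buffer_id, total_len, in_port, reason) = struct.unpack('!BBHIIHHBx', msg[:18])
    frame = msg[18:length]   # captured Ethernet frame (possibly truncated to miss_send_len)
    return buffer_id, total_len, in_port, OFPR_NAMES.get(reason, 'UNKNOWN'), frame

# Made-up example: a 60-byte frame received on port 1 with no matching flow entry.
frame = bytes(60)
msg = struct.pack('!BBHIIHHBx', 0x01, 10, 18 + len(frame), 5, 0xffffffff, len(frame), 1, 0) + frame
print(parse_packet_in(msg)[:4])   # (4294967295, 60, 1, 'NO_MATCH')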

7. Flow_Mod

struct ofp_flow_mod {
    struct ofp_header header;
    struct ofp_match match;      /* Fields to match */
    uint64_t cookie;             /* Opaque controller-issued identifier. */

    /* Flow actions. */
    uint16_t command;             /* One of OFPFC_*. */
    uint16_t idle_timeout;        /* Idle time before discarding (seconds). */
    uint16_t hard_timeout;        /* Max time before discarding (seconds). */
    uint16_t priority;            /* Priority level of flow entry. */
    uint32_t buffer_id;           /* Buffered packet to apply to (or -1).
                                     Not meaningful for OFPFC_DELETE*. */
    uint16_t out_port;            /* For OFPFC_DELETE* commands, require
                                     matching entries to include this as an
                                     output port.  A value of OFPP_NONE
                                     indicates no restriction. */
    uint16_t flags;               /* One of OFPFF_*. */
    struct ofp_action_header actions[0]; /* The action length is inferred
                                            from the length field in the
                                            header. */

};
struct ofp_action_header {
    uint16_t type;                  /* One of OFPAT_*. */
    uint16_t len;                   /* Length of action, including this
                                       header.  This is the length of action,
                                       including any padding to make it
                                       64-bit aligned. */
    uint8_t pad[4];
};
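
A hedged sketch of picking out the command and walking the action list; assumptions not shown in the excerpt: struct ofp_match is 40 bytes in OpenFlow 1.0, so the fixed ofp_flow_mod part is 72 bytes, and OFPFC_ADD through OFPFC_DELETE_STRICT are numbered 0 to 4 as in openflow.h:

import struct

OFPFC_NAMES = {0: 'ADD', 1: 'MODIFY', 2: 'MODIFY_STRICT', 3: 'DELETE', 4: 'DELETE_STRICT'}

def parse_flow_mod(msg):
    length = struct.unpack_from('!H', msg, 2)[0]             # total length from ofp_header
    command = struct.unpack_from('!H', msg, 8 + 40 + 8)[0]   # after header, ofp_match and cookie
    actions = []
    offset = 72                                              # end of the fixed ofp_flow_mod part
    while offset < length:
        a_type, a_len = struct.unpack_from('!HH', msg, offset)
        actions.append((a_type, a_len))                      # e.g. (0, 8) is an OFPAT_OUTPUT action
        offset += a_len
    return OFPFC_NAMES.get(command, 'UNKNOWN'), actions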

8. Packet_Out

/* Send packet (controller -> datapath). */
struct ofp_packet_out {
    struct ofp_header header;
    uint32_t buffer_id;           /* ID assigned by datapath (-1 if none). */
    uint16_t in_port;             /* Packet's input port (OFPP_NONE if none). */
    uint16_t actions_len;         /* Size of action array in bytes. */
    struct ofp_action_header actions[0]; /* Actions. */
    /* uint8_t data[0]; */        /* Packet data.  The length is inferred
                                     from the length field in the header.
                                     (Only meaningful if buffer_id == -1.) */
};
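
For illustration, a minimal sketch of building a Packet_Out that floods an unbuffered frame, using the OpenFlow 1.0 constants OFPT_PACKET_OUT = 13, OFPP_NONE = 0xffff, OFPP_FLOOD = 0xfffb and OFPAT_OUTPUT = 0 (an assumed example, not a message from the capture):

import struct

OFPT_PACKET_OUT = 13
OFPP_NONE = 0xffff      # no input port
OFPP_FLOOD = 0xfffb     # flood out all ports except the input port
OFPAT_OUTPUT = 0

def build_packet_out_flood(xid, frame):
    # ofp_action_output: type, len, port, max_len (8 bytes in total).
    action = struct.pack('!HHHH', OFPAT_OUTPUT, 8, OFPP_FLOOD, 0)
    # Fixed part: buffer_id + in_port + actions_len; buffer_id = -1 (0xffffffff)
    # means the frame itself is appended after the actions.
    body = struct.pack('!IHH', 0xffffffff, OFPP_NONE, len(action)) + action + frame
    header = struct.pack('!BBHI', 0x01, OFPT_PACKET_OUT, 8 + len(body), xid)
    return header + body

print(len(build_packet_out_flood(1, bytes(60))))   # 8 + 8 + 8 + 60 = 84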

Summary

The main problem I ran into in this experiment was that I could not find any Flow_Mod packet in the capture at first. After repeated searching on Baidu, I found that Flow_Mod packets only appear after running pingall, and that the Wireshark capture has to be started before the topology is built, otherwise messages are missed. This experiment taught me that whatever I do, I should take it slowly and steadily, because otherwise it is easy to overlook details. I also practised using Wireshark, including capturing packets, filtering them, and inspecting the parsed packet data; I became familiar with the OpenFlow protocol, gained a deeper understanding of how the controller and switch communicate, and learned the data structure definitions corresponding to the OpenFlow message types. The difficulty of this experiment was reasonable: by following the PPT the tasks can be completed efficiently, and the main requirement is carefulness.

From: https://www.cnblogs.com/sxlss/p/16736580.html
