
SDN 3: Assignment 2-2

  1. https://type.dayiyi.top/index.php/archives/306/
  2. https://cmd.dayi.ink/btoyilxmRNq9VEuZa8_0_Q?view

Virtual machine

[Test 0.1] Ubuntu_20.04_sdn_ovs_2.17.8-LTS-fix1. Give the VM at least 2 GB of extra memory, otherwise the build easily gets KILLED.

Build with IDEA (if you want the latest?)

Download it here:

https://www.jetbrains.com/zh-cn/idea/download/?section=linux

Then copy the downloaded file into the VM.
FileZilla is recommended for the transfer (check the VM's IP with `ip addr`).

Basic environment

# move the two files from the desktop into the home directory
su
mv ./Desktop/ideaIC-2023.3.tar.gz ./
mv ./Desktop/floodlight_github_after_git_submodule_update.zip ./
tar -zxvf ideaIC-2023.3.tar.gz
unzip floodlight_github_after_git_submodule_update.zip
chmod -R 777 floodlight
# install the basics
apt update
apt install git build-essential ant maven python-dev openjfx -y
apt install openjdk-8-jdk -y

MVN CLEAN:

cd floodlight
mvn clean

Launch IDEA

Go into idea-IC-xxx/bin

# a regular (non-root) user is fine
./idea.sh

Choose the project folder:

Wait for the sync to finish.

Select the JDK:

  1. Click build first.
  2. Download a JDK.
  3. Then build again.
  4. If it errors out, go into the settings and build again.

Building the JAR

  1. Open the artifact settings and click through the dialogs shown in the screenshots.
  2. Click APPLY first.
  3. Build: Rebuild Module.
  4. Build.

Then it succeeds (if it fails, try giving the VM another 3 GB of memory).

Launch

cd floodlight/
java -jar ./out/artifacts/floodlight_jar/floodlight.jar

Then open the web UI:

http://localhost:8080/ui/pages/index.html

Docker image

su
apt install docker-compose -y
docker pull latarc/floodlight
docker run -it --rm latarc/floodlight

Honestly the nice thing here is that it takes zero thought, as long as the network is up. (This image.)

So the Docker image route works (sadly), and that's the method used from here on.

docker run -it -p 8080:8080 -p 6653:6653 --rm latarc/floodlight

# 8080:8080
# host : container
# add or change mappings if you need more or different ports
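If you'd rather not retype the `docker run` flags, the same port mappings can live in a compose file. This is just a sketch of my own (service name and file are assumptions, untested):

```yaml
# hypothetical docker-compose.yml for the latarc/floodlight image
services:
  floodlight:
    image: latarc/floodlight
    ports:
      - "8080:8080"   # host:container, web UI
      - "6653:6653"   # host:container, OpenFlow
```

Then `docker-compose up` should be equivalent to the `docker run` line above.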

OVO


# in another terminal
sudo mn --topo single,4 --controller remote,ip=127.0.0.1,port=6653
http://localhost:8080/ui/pages/index.html

Installing Floodlight from source

This failed.

# this failed
sudo apt-get autoremove openjdk-11-jre-headless -y
sudo apt-get install openjdk-8-jdk -y
mkdir -pv ~/sdn/floodlight
cd ~/sdn/floodlight
git clone https://github.com/floodlight/floodlight.git
cd ~/sdn/floodlight/floodlight
git pull origin master 
git submodule init 
git submodule update 
sudo apt-get install maven python-dev -y
sudo apt-get install build-essential -y
sudo apt install ant -y
ant
sudo mkdir /var/lib/floodlight
sudo chmod 777 /var/lib/floodlight
sudo apt-get install openjfx

Docker build

This failed too.

docker run -it -v ./floodlight_docker:/app ubuntu:16.04 bash
sed -i 's@//.*archive.ubuntu.com@//mirrors.ustc.edu.cn@g' /etc/apt/sources.list
apt update 
apt install sudo -y
cd /app
sudo apt-get install build-essential ant maven python-dev openjfx -y

apt install openjdk-8-jdk git 
git clone https://github.com/floodlight/floodlight.git
cd floodlight
git submodule init
git submodule update

SDN Assignment 5 (I've lost count by now): analyzing RYU scripts

Analyze the two scripts.

Initial environment

Ryu is already installed.

su
find / | grep simple_switch_13.py # locate your copy of the script
# mine is under /opt/ryu/ryu (the part in front of the trailing /ryu/app/simple_switch_13.py)
cd /opt/ryu/ryu
python3 bin/ryu run --verbose --observe-links ryu/app/gui_topology/gui_topology.py ryu/app/simple_switch_13.py

Finding the file:

Then just try running it.

# in another terminal
ovs-ctl start 
mn --controller remote --topo tree,depth=3

As long as the above works, that's enough.

Simple Switch 13

Launch

# launch the app from ryu/ryu/app

# find an install directory that looks like this (or the copy you put into the VM)
cd /opt/ryu/ryu/ryu/app/

ryu-manager simple_switch_13.py --verbose

# in another terminal
mn -c && mn --controller=remote --topo tree,depth=3

Code you might find useful:

Source: https://www.cnblogs.com/wangxiaotao/p/8645451.html
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys

from ryu.cmd import manager


def main():
    # just replace /home/tao/workspace/python/ryu_test/app/simple_switch_lacp_13.py with the full path of the script you want to debug
    sys.argv.append('/home/sdn/simple_switch_13.py')
    sys.argv.append('--verbose')
    sys.argv.append('--enable-debugger')
    manager.main()

if __name__ == '__main__':
    main()

Tweak it a little and debugging works (remember to set breakpoints in the other file).

Annotated version:

from ryu.base import app_manager  # the Ryu application manager, used to manage SDN apps
from ryu.controller import ofp_event  # OpenFlow event module, for handling OpenFlow events
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER  # dispatcher states, marking which phase an event handler runs in
from ryu.controller.handler import set_ev_cls  # event decorator, binding a handler function to a specific event
from ryu.ofproto import ofproto_v1_3  # OpenFlow 1.3 protocol module, the version this program uses
from ryu.lib.packet import packet  # packet module, for parsing and constructing packets
from ryu.lib.packet import ethernet  # Ethernet protocol module, for handling Ethernet frames
from ryu.lib.packet import ether_types  # EtherType module, with the EtherType field constants

# SimpleSwitch13: the app class, inheriting from RyuApp.
class SimpleSwitch13(app_manager.RyuApp):
    # version 1.3
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    
    # Call the parent class's __init__ first, then define our own MAC-to-port dictionary.
    def __init__(self, *args, **kwargs):
        super(SimpleSwitch13, self).__init__(*args, **kwargs)
        self.mac_to_port = {}

    # The decorator intercepts the event: here it catches switch events in the CONFIG_DISPATCHER phase.

    # When a switch connects to the controller and sends its features (supported flow-table count, capabilities, ...), this event (ofp_event.EventOFPSwitchFeatures) fires.

    # CONFIG_DISPATCHER is the state right after the controller and the switch establish a connection and do initial configuration. In this state the controller processes the switch's feature information and can install some initial flow entries.
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # ev here is the event
        datapath = ev.msg.datapath # get the datapath object from the event; it represents one OpenFlow switch and is the channel used to exchange messages with it
        ofproto = datapath.ofproto # the OpenFlow protocol definitions for this datapath's version
        parser = datapath.ofproto_parser # the OpenFlow message parser for this datapath's version

        # The comment below says there used to be a bug here, fixed in OVS v2.1.0.
        # install table-miss flow entry
        #
        # We specify NO BUFFER to max_len of the output action due to
        # OVS bug. At this moment, if we specify a lesser number, e.g.,
        # 128, OVS will send Packet-In with invalid buffer_id and
        # truncated packet data. In that case, we cannot output packets
        # correctly.  The bug has been fixed in OVS v2.1.0.

        match = parser.OFPMatch() # an empty match, i.e. match all traffic
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,ofproto.OFPCML_NO_BUFFER)] # action: send matched packets to the controller without buffering them
        self.add_flow(datapath, 0, match, actions) # call add_flow to install the entry

    # install a flow entry
    def add_flow(self, datapath, priority, match, actions, buffer_id=None):
        ofproto = datapath.ofproto # get the protocol definitions
        parser = datapath.ofproto_parser # get the protocol parser

        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,actions)] # instruction list wrapping the actions passed in by the caller
        if buffer_id: # the packet-in handler passes a buffer_id only when the switch buffered the packet
            mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id,
                                    priority=priority, match=match,
                                    instructions=inst) # if a buffer_id was given, build a flow-mod that references it
        else:
            mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                    match=match, instructions=inst) # otherwise build a plain flow-mod
        datapath.send_msg(mod) # then send the flow-mod to the switch

    # Another decorated handler; this one catches Packet-In.
    # When the switch receives a packet and has no matching flow entry telling it what to do, it sends the packet to the controller; this event notifies the controller that such a packet has arrived.
    # MAIN_DISPATCHER: the state usually entered once initial configuration with the switch is done. Most traffic handling (packet-in, state changes, ...) happens in this state.
    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        # If you hit this you might want to increase
        # the "miss_send_length" of your switch

        # If the received packet was truncated (i.e. incomplete), log it at debug level.
        if ev.msg.msg_len < ev.msg.total_len:
            self.logger.debug("packet truncated: only %s of %s bytes",
                              ev.msg.msg_len, ev.msg.total_len)
            
        msg = ev.msg # get the msg from the event
        datapath = msg.datapath # get the datapath
        ofproto = datapath.ofproto # get the protocol definitions
        parser = datapath.ofproto_parser # get the protocol parser


        in_port = msg.match['in_port'] # get the ingress port from the message
        pkt = packet.Packet(msg.data) # turn the received bytes into a Packet object
        eth = pkt.get_protocols(ethernet.ethernet)[0] # get the Ethernet header from the Packet object

        if eth.ethertype == ether_types.ETH_TYPE_LLDP: # ignore LLDP packets; LLDP is the link-layer discovery protocol network devices use to find each other
            # ignore lldp packet
            return
        dst = eth.dst # the frame's destination MAC
        src = eth.src # the frame's source MAC

        

        dpid = format(datapath.id, "d").zfill(16) # get the switch ID, zero-padded to a 16-digit string
        self.logger.info(f"[{dpid}] current packet: src MAC {src}, dst MAC {dst}")
        self.mac_to_port.setdefault(dpid, {}) # if no MAC-to-port map exists for this dpid yet, initialize it to an empty dict; if one exists, leave it alone

        self.logger.info("[%s] current packet: packet in dpid:%s src:%s dst:%s in_port:%s",dpid, dpid, src, dst, in_port)

        # learn a mac address to avoid FLOOD next time.
        self.mac_to_port[dpid][src] = in_port # store [source MAC] = ingress port

        if dst in self.mac_to_port[dpid]: # if dst (the destination MAC) is in the table for this dpid
            out_port = self.mac_to_port[dpid][dst] # use the already-learned port for the destination
            self.logger.info("[%s] src:%s , dst:%s already learned",dpid,src,dst)
        else:
            out_port = ofproto.OFPP_FLOOD # otherwise the output port is the flood port
            self.logger.info("[%s] src:%s , dst:%s flooding:%s",dpid,src,dst,out_port)

        actions = [parser.OFPActionOutput(out_port)] # action: send the packet out of the chosen port

        self.logger.info(f"[controller MAC table]{self.mac_to_port}")
        # install a flow to avoid packet_in next time
        if out_port != ofproto.OFPP_FLOOD: # only when the output port is not the flood port
            match = parser.OFPMatch(in_port=in_port, eth_dst=dst, eth_src=src) # match on the ingress port plus the source and destination MAC
            # verify if we have a valid buffer_id, if yes avoid to send both
            # flow_mod & packet_out
            if msg.buffer_id != ofproto.OFP_NO_BUFFER:  # a buffer_id other than OFP_NO_BUFFER means the switch cached the packet and did not send the full payload to the controller
                self.add_flow(datapath, 1, match, actions, msg.buffer_id) # install the flow entry with the buffer_id, so the switch also applies it to the cached packet, and we are done
                return
            else:
                self.add_flow(datapath, 1, match, actions) # otherwise install a plain flow entry
        data = None
        if msg.buffer_id == ofproto.OFP_NO_BUFFER: # the switch did not cache the packet, i.e. the full payload reached the controller
            data = msg.data # so attach the raw packet data ourselves
        out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,in_port=in_port, actions=actions, data=data) # build the packet-out message
        datapath.send_msg(out) # send the packet-out to the switch
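A side note on the `dpid` line above: `format(datapath.id, "d")` renders the datapath ID in decimal and `zfill(16)` left-pads it with zeros to 16 characters. A quick standalone check:

```python
# Same formatting as the handler uses, with a literal ID standing in for datapath.id.
dpid = format(1, "d").zfill(16)
print(dpid)  # -> 0000000000000001
```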

wow:

Some details

Might aid understanding, maybe..
  • datapath

  • ofproto

This one looks like a set of definitions.

  • parser

  • wow

  • the MAC database:

HUB

This one is simpler.


from ryu.base import app_manager # the app manager
from ryu.ofproto import ofproto_v1_3 # OpenFlow 1.3 protocol module
from ryu.controller import ofp_event # OpenFlow event module, for handling OpenFlow events
from ryu.controller.handler import MAIN_DISPATCHER, CONFIG_DISPATCHER # the two dispatcher phases: main and config
from ryu.controller.handler import set_ev_cls # event decorator

# wa, this is a HUB
class Hub(app_manager.RyuApp):
    # openflow version # OpenFlow protocol version 1.3
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs): # constructor
        super(Hub, self).__init__(*args, **kwargs) # again, call the parent class's constructor
        # note there is no MAC-table definition here


    # Same as before: hook the state where the controller and the switch have just connected and do initial configuration.
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath # get the datapath from the event, the channel for talking to the switch, same as before
        ofproto = datapath.ofproto # then get the protocol definitions, same as before
        ofp_parser = datapath.ofproto_parser # get the protocol parser, same as before

        # install the table-miss flow entry: it sends every unmatched packet to the controller, which is what produces Packet-In events
        match = ofp_parser.OFPMatch()  # an empty match, i.e. match every packet
        actions = [ofp_parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,ofproto.OFPCML_NO_BUFFER)] # an action list with one action: send the packet to the controller
        # OFPP_CONTROLLER: every packet matching this flow entry is sent to the controller.
        # OFPCML_NO_BUFFER: the max length of the packet sent to the controller; NO_BUFFER means do not buffer, send the whole packet.

        self.add_flow(datapath, 0, match, actions) # install the flow

    def add_flow(self, datapath, priority, match, actions): # build a flow entry and install it on the switch (datapath)
        # add a flow entry, and install it into datapath.
        ofproto = datapath.ofproto # get the protocol definitions
        ofp_parser = datapath.ofproto_parser # get the protocol parser

        # construct a flow_mod msg and send it
        inst = [ofp_parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,actions)] # instruction list applying the actions
        mod = ofp_parser.OFPFlowMod(datapath=datapath, priority=priority,match=match, instructions=inst) # build the flow-mod message
        datapath.send_msg(mod) # send the flow-mod to the switch

    # Again, MAIN_DISPATCHER: the state entered once initial configuration with the switch is done; Packet-In is handled there.
    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg # get the event's msg, same as before
        datapath = msg.datapath # same, get the datapath
        ofproto = datapath.ofproto # same, get the protocol definitions
        ofp_parser = datapath.ofproto_parser # same, get the protocol parser
        in_port = msg.match['in_port'] # get the packet's ingress port
        self.logger.info(f"in port:{in_port}")


        # construct a flow entry
        match = ofp_parser.OFPMatch() # an empty match
        actions = [ofp_parser.OFPActionOutput(ofproto.OFPP_FLOOD)] # an action list: flood to all ports, same as before

        # install flow mod to avoid packet_in next time, same as before
        self.add_flow(datapath, 1, match, actions) # call add_flow to install the entry, same as before

        out = ofp_parser.OFPPacketOut(
            datapath=datapath, buffer_id=msg.buffer_id, in_port=in_port,
            actions=actions) # build the packet-out message, same as before
        datapath.send_msg(out) # send the packet-out to the switch, same as before

Comparison

1. Learning switch (first program)

  • MAC learning: maintains a MAC-address-to-port map (mac_to_port) to learn the devices on the network. On each new packet it consults this map to decide how to forward.
  • Avoids flooding: if the destination MAC has already been learned (i.e. it is in mac_to_port), the switch forwards the packet out of one specific port instead of broadcasting it to all ports.
  • Flow generation: on a packet with an unknown destination MAC, the program first floods it, then installs a new flow entry for future packets from that source to that destination, so next time it can forward directly instead of flooding again.

2. Hub (second program)

  • No MAC learning: keeps no MAC-to-port map at all. It knows nothing about the devices on the network and never learns their addresses.
  • Always floods: whenever the hub receives a packet, regardless of its source or destination, it floods it out of all ports.
  • Flow generation: the hub's flow entry simply matches every incoming packet and floods it. Even with this entry installed, every matching packet goes out of every port.

Difference in flow generation

  • In the learning switch, flow entries are generated dynamically, based on the source and destination MAC addresses observed in traffic.
  • In the hub, the flow entry is static: it matches all incoming packets and applies the same flood action, with no per-flow decision making.
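The learning-versus-flooding logic above can be sketched without any OpenFlow plumbing. The toy below is my own sketch (the names `LearningSwitch`, `forward`, and `FLOOD` are mine, not Ryu's); it mimics what `mac_to_port` does in the first program:

```python
FLOOD = "FLOOD"  # stand-in for ofproto.OFPP_FLOOD

class LearningSwitch:
    """Toy model of the mac_to_port logic in simple_switch_13.py."""

    def __init__(self):
        self.mac_to_port = {}  # dpid -> {mac: port}

    def forward(self, dpid, src, dst, in_port):
        table = self.mac_to_port.setdefault(dpid, {})
        table[src] = in_port          # learn: this source MAC lives behind in_port
        return table.get(dst, FLOOD)  # known destination -> its port, otherwise flood

sw = LearningSwitch()
# h1 (port 1) pings h2 (port 2): h2's MAC is unknown, so the switch floods
print(sw.forward("s1", "mac-h1", "mac-h2", 1))  # -> FLOOD
# h2 replies: h1's MAC was learned above, so the reply goes straight out port 1
print(sw.forward("s1", "mac-h2", "mac-h1", 2))  # -> 1
```

The hub, by contrast, would be the same class with the `table` lookup deleted: always return FLOOD.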

SDN: installing Ryu

Virtual machine

Ubuntu_20.04_sdn_ovs-2.17.8-LTS-fix1
Note: this is the fix1 image, i.e. [Test 0.1].

Install:

su
mkdir -pv /opt/ryu
cd /opt/ryu && apt install python3-pip -y 
git clone https://github.com/osrg/ryu.git
cd ryu && apt install python3-eventlet -y && apt install python3-routes -y && apt install python3-webob -y  && apt install python3-paramiko -y
pip3 install -i https://mirrors.ustc.edu.cn/pypi/web/simple -r tools/pip-requires
python3 setup.py install
ryu-manager # check whether this reports errors; it ran fine for me

Test 1

su
cd /opt/ryu/ryu
ovs-ctl start
python3 bin/ryu run --verbose --observe-links ryu/app/gui_topology/gui_topology.py ryu/app/simple_switch_13.py


# in another terminal
mn --controller remote --topo tree,depth=3

Remember to refresh the web page.

Test 2

nano hub.py

from ryu.base import app_manager
from ryu.ofproto import ofproto_v1_3
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, CONFIG_DISPATCHER
from ryu.controller.handler import set_ev_cls


class Hub(app_manager.RyuApp):
    # openflow version
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(Hub, self).__init__(*args, **kwargs)

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        ofp_parser = datapath.ofproto_parser

        # install the table-miss flow entry.
        match = ofp_parser.OFPMatch()
        actions = [ofp_parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(datapath, 0, match, actions)

    def add_flow(self, datapath, priority, match, actions):
        # add a flow entry, and install it into datapath.
        ofproto = datapath.ofproto
        ofp_parser = datapath.ofproto_parser

        # contruct a flow_mod msg and sent it.
        inst = [ofp_parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
        mod = ofp_parser.OFPFlowMod(datapath=datapath, priority=priority,
                                    match=match, instructions=inst)

        datapath.send_msg(mod)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        ofp_parser = datapath.ofproto_parser
        in_port = msg.match['in_port']

        # contruct a flow entry.
        match = ofp_parser.OFPMatch()
        actions = [ofp_parser.OFPActionOutput(ofproto.OFPP_FLOOD)]

        # install flow mod to avoid packet_in next time.
        self.add_flow(datapath, 1, match, actions)

        out = ofp_parser.OFPPacketOut(
            datapath=datapath, buffer_id=msg.buffer_id, in_port=in_port,
            actions=actions)
        datapath.send_msg(out)

Launch:

ryu-manager hub.py --verbose

# in another terminal
mn --controller=remote --topo tree,depth=3
mininet> pingall

Test 3

nano simple_switch_13.py (in nano, Ctrl+S saves)

# imports
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet
from ryu.lib.packet import ethernet
from ryu.lib.packet import ether_types
 
 
class SimpleSwitch13(app_manager.RyuApp):
    # define the OpenFlow version
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
 
    def __init__(self, *args, **kwargs):
        super(SimpleSwitch13, self).__init__(*args, **kwargs)
        # define the map from MAC address to port
        self.mac_to_port = {}
 
    # handle the EventOFPSwitchFeatures event
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
 
        # install table-miss flow entry
        #
        # We specify NO BUFFER to max_len of the output action due to
        # OVS bug. At this moment, if we specify a lesser number, e.g.,
        # 128, OVS will send Packet-In with invalid buffer_id and
        # truncated packet data. In that case, we cannot output packets
        # correctly.  The bug has been fixed in OVS v2.1.0.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(datapath, 0, match, actions)
 
    # flow-install helper
    def add_flow(self, datapath, priority, match, actions, buffer_id=None):
        # get the switch's protocol info
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
 
        # wrap the actions in an instruction
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # build the mod object, depending on whether there is a buffer_id
        if buffer_id:
            mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id,
                                    priority=priority, match=match,
                                    instructions=inst)
        else:
            mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                    match=match, instructions=inst)
        # send the mod
        datapath.send_msg(mod)
 
    # handle the packet-in event
    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        # If you hit this you might want to increase
        # the "miss_send_length" of your switch
        if ev.msg.msg_len < ev.msg.total_len:
            self.logger.debug("packet truncated: only %s of %s bytes",
                              ev.msg.msg_len, ev.msg.total_len)
        # get the packet, switch, protocol, and so on
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match['in_port']
 
        pkt = packet.Packet(msg.data)
        eth = pkt.get_protocols(ethernet.ethernet)[0]
 
        # ignore LLDP frames
        if eth.ethertype == ether_types.ETH_TYPE_LLDP:
            # ignore lldp packet
            return
 
        # get the source and destination MAC addresses
        dst = eth.dst
        src = eth.src
 
        dpid = format(datapath.id, "d").zfill(16)
        self.mac_to_port.setdefault(dpid, {})
 
        self.logger.info("packet in %s %s %s %s", dpid, src, dst, in_port)
 
        # learn the packet's source address, bound to the switch's ingress port
        # learn a mac address to avoid FLOOD next time.
        self.mac_to_port[dpid][src] = in_port
 
        # check whether this destination MAC has already been learned
        if dst in self.mac_to_port[dpid]:
            out_port = self.mac_to_port[dpid][dst]
        # otherwise flood
        else:
            out_port = ofproto.OFPP_FLOOD
 
        actions = [parser.OFPActionOutput(out_port)]
 
        # install a flow so later packets are forwarded without another packet-in
        # install a flow to avoid packet_in next time
        if out_port != ofproto.OFPP_FLOOD:
            match = parser.OFPMatch(in_port=in_port, eth_dst=dst, eth_src=src)
            # verify if we have a valid buffer_id, if yes avoid to send both
            # flow_mod & packet_out
            if msg.buffer_id != ofproto.OFP_NO_BUFFER:
                self.add_flow(datapath, 1, match, actions, msg.buffer_id)
                return
            else:
                self.add_flow(datapath, 1, match, actions)
        data = None
        if msg.buffer_id == ofproto.OFP_NO_BUFFER:
            data = msg.data
 
        out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
                                  in_port=in_port, actions=actions, data=data)
        # send the packet-out
        datapath.send_msg(out)
  • Terminal 1
ryu-manager simple_switch_13.py --verbose
  • Terminal 2
mn --controller=remote --topo tree,depth=3
> pingall

WOW

SDN Assignment 2: Open vSwitch practice 1

This article is also available at:

  1. https://type.dayiyi.top/index.php/archives/292/
  2. https://blog.dayi.ink/?p=162
  3. https://www.cnblogs.com/rabbit-dayi/p/17868059.html
  4. https://cmd.dayi.ink/ltQtj5aOSi-x2Xc6xOuHGg

Everything in this experiment runs as root.

If it fails, it may be an OVS problem; see the end of the article.

su
sudo su

Setup

Start the OVS service:

ovs-ctl start

Then get two shells open.

Task 1

Clear state first.

Terminal 1:

mn -c
mn

Terminal 2 (open another terminal):

ovs-ofctl dump-flows s1

No flow entries yet:

PingALL

# CLI, terminal 1
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2
h2 -> h1
*** Results: 0% dropped (2/2 received)
mininet>

Check the flow table:

# terminal 2
ovs-ofctl dump-flows s1

Build the topology

# terminal 1
# clean up, then create 1 switch with 4 hosts
mn -c && mn --topo single,4

Check the flow table

# terminal 2
# check the flow table
ovs-ofctl dump-flows s1 # this should be empty

Set IPs

Updated content:
mininet> h1 ifconfig h1-eth0 10.0.0.1
mininet> h2 ifconfig h2-eth0 10.0.0.2
mininet> h3 ifconfig h3-eth0 10.0.0.3
mininet> h4 ifconfig h4-eth0 10.0.0.4

PINGALL

# terminal 1
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2 h3 h4
h2 -> h1 h3 h4
h3 -> h1 h2 h4
h4 -> h1 h2 h3
*** Results: 0% dropped (12/12 received)
mininet>
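A quick check on the count in the output above: pingall tests every ordered pair of hosts, so with 4 hosts there are 4 × 3 = 12 pings:

```python
# pingall sends one ping per ordered (source, destination) host pair
hosts = 4
pings = hosts * (hosts - 1)
print(pings)  # -> 12
```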

Everything pings here.

Current flow table:

# terminal 2
root@ubuntu:/home/sdn# ovs-ofctl dump-flows s1
 cookie=0x0, duration=55.843s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=ca:76:64:1b:37:77,arp_spa=10.0.0.2,arp_tpa=10.0.0.1,arp_op=2 actions=output:"s1-eth1"
 cookie=0x0, duration=55.841s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=ca:76:64:1b:37:77,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=2 actions=output:"s1-eth1"
 cookie=0x0, duration=55.839s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=ca:76:64:1b:37:77,arp_spa=10.0.0.4,arp_tpa=10.0.0.1,arp_op=2 actions=output:"s1-eth1"
 cookie=0x0, duration=55.836s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=22:87:c2:36:ab:33,arp_spa=10.0.0.3,arp_tpa=10.0.0.2,arp_op=2 actions=output:"s1-eth2"
 cookie=0x0, duration=55.833s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=22:87:c2:36:ab:33,arp_spa=10.0.0.4,arp_tpa=10.0.0.2,arp_op=2 actions=output:"s1-eth2"
 cookie=0x0, duration=55.829s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=12:18:0f:59:6d:59,arp_spa=10.0.0.4,arp_tpa=10.0.0.3,arp_op=2 actions=output:"s1-eth3"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=12:18:0f:59:6d:59,arp_spa=10.0.0.4,arp_tpa=10.0.0.3,arp_op=1 actions=output:"s1-eth3"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=22:87:c2:36:ab:33,arp_spa=10.0.0.3,arp_tpa=10.0.0.2,arp_op=1 actions=output:"s1-eth2"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=ca:76:64:1b:37:77,arp_spa=10.0.0.2,arp_tpa=10.0.0.1,arp_op=1 actions=output:"s1-eth1"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=22:87:c2:36:ab:33,arp_spa=10.0.0.4,arp_tpa=10.0.0.2,arp_op=1 actions=output:"s1-eth2"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=ca:76:64:1b:37:77,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=1 actions=output:"s1-eth1"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=ca:76:64:1b:37:77,arp_spa=10.0.0.4,arp_tpa=10.0.0.1,arp_op=1 actions=output:"s1-eth1"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=c2:4d:1b:f3:02:51,arp_spa=10.0.0.3,arp_tpa=10.0.0.4,arp_op=2 actions=output:"s1-eth4"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=12:18:0f:59:6d:59,arp_spa=10.0.0.2,arp_tpa=10.0.0.3,arp_op=2 actions=output:"s1-eth3"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=22:87:c2:36:ab:33,arp_spa=10.0.0.1,arp_tpa=10.0.0.2,arp_op=2 actions=output:"s1-eth2"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=12:18:0f:59:6d:59,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=2 actions=output:"s1-eth3"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=c2:4d:1b:f3:02:51,arp_spa=10.0.0.1,arp_tpa=10.0.0.4,arp_op=2 actions=output:"s1-eth4"
 cookie=0x0, duration=50.806s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,arp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=c2:4d:1b:f3:02:51,arp_spa=10.0.0.2,arp_tpa=10.0.0.4,arp_op=2 actions=output:"s1-eth4"
 cookie=0x0, duration=55.843s, table=0, n_packets=3, n_bytes=294, idle_timeout=60, priority=65535,icmp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=22:87:c2:36:ab:33,nw_src=10.0.0.1,nw_dst=10.0.0.2,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth2"
 cookie=0x0, duration=55.843s, table=0, n_packets=1, n_bytes=98, idle_timeout=60, priority=65535,icmp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=ca:76:64:1b:37:77,nw_src=10.0.0.2,nw_dst=10.0.0.1,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth1"
 cookie=0x0, duration=55.841s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=12:18:0f:59:6d:59,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth3"
 cookie=0x0, duration=55.841s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=ca:76:64:1b:37:77,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth1"
 cookie=0x0, duration=55.839s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=c2:4d:1b:f3:02:51,nw_src=10.0.0.1,nw_dst=10.0.0.4,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth4"
 cookie=0x0, duration=55.839s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=ca:76:64:1b:37:77,nw_src=10.0.0.4,nw_dst=10.0.0.1,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth1"
 cookie=0x0, duration=55.838s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=ca:76:64:1b:37:77,nw_src=10.0.0.2,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth1"
 cookie=0x0, duration=55.838s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=22:87:c2:36:ab:33,nw_src=10.0.0.1,nw_dst=10.0.0.2,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth2"
 cookie=0x0, duration=55.835s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=12:18:0f:59:6d:59,nw_src=10.0.0.2,nw_dst=10.0.0.3,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth3"
 cookie=0x0, duration=55.835s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=22:87:c2:36:ab:33,nw_src=10.0.0.3,nw_dst=10.0.0.2,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth2"
 cookie=0x0, duration=55.833s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=c2:4d:1b:f3:02:51,nw_src=10.0.0.2,nw_dst=10.0.0.4,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth4"
 cookie=0x0, duration=55.833s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=22:87:c2:36:ab:33,nw_src=10.0.0.4,nw_dst=10.0.0.2,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth2"
 cookie=0x0, duration=55.832s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=ca:76:64:1b:37:77,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth1"
 cookie=0x0, duration=55.831s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=12:18:0f:59:6d:59,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth3"
 cookie=0x0, duration=55.830s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=22:87:c2:36:ab:33,nw_src=10.0.0.3,nw_dst=10.0.0.2,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth2"
 cookie=0x0, duration=55.830s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=12:18:0f:59:6d:59,nw_src=10.0.0.2,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth3"
 cookie=0x0, duration=55.829s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=c2:4d:1b:f3:02:51,nw_src=10.0.0.3,nw_dst=10.0.0.4,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth4"
 cookie=0x0, duration=55.828s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=12:18:0f:59:6d:59,nw_src=10.0.0.4,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth3"
 cookie=0x0, duration=55.827s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=ca:76:64:1b:37:77,nw_src=10.0.0.4,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth1"
 cookie=0x0, duration=55.827s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth1",vlan_tci=0x0000,dl_src=ca:76:64:1b:37:77,dl_dst=c2:4d:1b:f3:02:51,nw_src=10.0.0.1,nw_dst=10.0.0.4,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth4"
 cookie=0x0, duration=55.826s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=22:87:c2:36:ab:33,nw_src=10.0.0.4,nw_dst=10.0.0.2,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth2"
 cookie=0x0, duration=55.826s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth2",vlan_tci=0x0000,dl_src=22:87:c2:36:ab:33,dl_dst=c2:4d:1b:f3:02:51,nw_src=10.0.0.2,nw_dst=10.0.0.4,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth4"
 cookie=0x0, duration=55.825s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth4",vlan_tci=0x0000,dl_src=c2:4d:1b:f3:02:51,dl_dst=12:18:0f:59:6d:59,nw_src=10.0.0.4,nw_dst=10.0.0.3,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:"s1-eth3"
 cookie=0x0, duration=55.825s, table=0, n_packets=0, n_bytes=0, idle_timeout=60, priority=65535,icmp,in_port="s1-eth3",vlan_tci=0x0000,dl_src=12:18:0f:59:6d:59,dl_dst=c2:4d:1b:f3:02:51,nw_src=10.0.0.3,nw_dst=10.0.0.4,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:"s1-eth4"
root@ubuntu:/home/sdn#

Modifying the flow table

Block H1 from reaching the other hosts

# Terminal 1
# Restart Mininet: exit the old session with CTRL+D first
mn -c && mn --topo single,4


# Terminal 2
ovs-ofctl add-flow s1 icmp,nw_src=10.0.0.1,icmp_type=8,action=drop
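For reference, the same match string can be generated for any host with a small helper (the function name is my own, just for illustration):

```python
def icmp_drop_rule(src_ip: str) -> str:
    """Build an ovs-ofctl flow spec that drops ICMP echo requests from src_ip."""
    return f"icmp,nw_src={src_ip},icmp_type=8,action=drop"

# The rule used above for H1:
print(icmp_drop_rule("10.0.0.1"))
# icmp,nw_src=10.0.0.1,icmp_type=8,action=drop
```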

Test:

# Terminal 1
mininet> pingall
*** Ping: testing ping reachability
h1 -> X X X
h2 -> h1 h3 h4
h3 -> h1 h2 h4
h4 -> h1 h2 h3
*** Results: 25% dropped (9/12 received)
mininet>
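The 25% figure is consistent: with four hosts, pingall issues 4 × 3 = 12 pings, and exactly h1's three outgoing pings hit the drop rule:

```python
hosts = 4
total_pings = hosts * (hosts - 1)   # pingall pings every ordered pair: 12
dropped = hosts - 1                 # only h1 -> {h2, h3, h4} are dropped
print(f"{dropped / total_pings:.0%} dropped "
      f"({total_pings - dropped}/{total_pings} received)")
# 25% dropped (9/12 received)
```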

Failed at first?

I downgraded the kernel to 5.8 and reinstalled OVS 2.17; remember to remove the old openvswitch kernel module.

That's roughly the process; it was quite a hassle.

The VM image is shared here to save you the trouble:

This is the version I use myself; it hasn't been fully tested and may differ slightly. Note that OVS has to be started manually.

Ubuntu_20.04_sdn_ovs_2.17.8-LTS-fix1

Link: https://pan.baidu.com/s/1fwvV2B_eH6D3xEQ2bYJnlQ?pwd=6y8l
Extraction code: 6y8l

SDN Assignment 1

Task: 1. upload the code with the assignment; 2. run the code with python3 and use the CLI to run pingall, nodes, net, dump, iperf h1 h2, and similar commands

Better-formatted version:

There may still be bugs; please report them.

Open the VM

I'm using the Ubuntu_20.04_sdn_ovs-2.17.8-LTS VM image here; the file is in the group's shared files.

Open miniedit.py

sudo python2 /opt/sdn/mininet/examples/miniedit.py

Draw the topology

Draw one following the diagram, then connect the nodes.

Set the IPs

Right-click a host and choose Properties.

Set each host's IP address in turn.

Set the link rate

Right-click a link and choose Properties.

Then set the parameters like this:

100 (Mbit/s bandwidth), 5ms (delay)

Enable CLI mode on startup

Edit->Preferences->Start CLI

CLI:

Save the files

Remember to save two copies here: the exported Python file cannot be loaded back into the graphical editor!

Save the .mn file

File->Save

Export the Python file

Export as a Level 2 Script

Also, the export should complete without any Python errors in the terminal.

Run and test

Change into the export directory (mine is /home/sdn).

Run:

python3 ovo.py

Ping test (switches s4 and s5 not connected to the controller)

# In the Mininet CLI
pingall
# (equivalently, via the Python API: net.pingAll())

Since those two switches aren't connected to a controller, some pings fail; only part of the hosts can reach each other.

Ping test (s4 and s5 connected to the controller)

mininet> pingall
*** Ping: testing ping reachability
h6 -> h8 h1 h5 h4 h2 h3 h7
h8 -> h6 h1 h5 h4 h2 h3 h7
h1 -> h6 h8 h5 h4 h2 h3 h7
h5 -> h6 h8 h1 h4 h2 h3 h7
h4 -> h6 h8 h1 h5 h2 h3 h7
h2 -> h6 h8 h1 h5 h4 h3 h7
h3 -> h6 h8 h1 h5 h4 h2 h7
h7 -> h6 h8 h1 h5 h4 h2 h3
*** Results: 0% dropped (56/56 received)
mininet>

Latency test:

Expected RTT is about 20 ms: each direction crosses two 5 ms links (h1-s1 and s1-h2), i.e. 10 ms out and 10 ms back.

mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=46.7 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=23.4 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=21.9 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=22.5 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=23.1 ms
64 bytes from 10.0.0.2: icmp_seq=6 ttl=64 time=22.9 ms
64 bytes from 10.0.0.2: icmp_seq=7 ttl=64 time=23.8 ms
^C
--- 10.0.0.2 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6007ms
rtt min/avg/max/mdev = 21.987/26.393/46.792/8.345 ms
mininet>
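The measured RTT lines up with the topology: two 5 ms links per direction give a nominal 20 ms round trip, and the observed ~22-26 ms average adds a little queuing and processing overhead on top:

```python
link_delay_ms = 5
links_one_way = 2                # h1-s1 and s1-h2
expected_rtt = 2 * links_one_way * link_delay_ms
print(expected_rtt)              # nominal RTT in ms, before overhead
```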

nodes

nodes

Result:

mininet> nodes
available nodes are:
c0 h1 h2 h3 h4 h5 h6 h7 h8 s1 s2 s3 s4 s5
mininet>

net

net

mininet> net
h7 h7-eth0:s3-eth2
h3 h3-eth0:s4-eth2
h4 h4-eth0:s4-eth3
h5 h5-eth0:s5-eth2
h1 h1-eth0:s1-eth1
h8 h8-eth0:s3-eth3
h6 h6-eth0:s5-eth3
h2 h2-eth0:s1-eth2
s2 lo:  s2-eth1:s1-eth3 s2-eth2:s3-eth1 s2-eth3:s4-eth1 s2-eth4:s5-eth1
s4 lo:  s4-eth1:s2-eth3 s4-eth2:h3-eth0 s4-eth3:h4-eth0
s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0 s1-eth3:s2-eth1
s3 lo:  s3-eth1:s2-eth2 s3-eth2:h7-eth0 s3-eth3:h8-eth0
s5 lo:  s5-eth1:s2-eth4 s5-eth2:h5-eth0 s5-eth3:h6-eth0
c0
mininet>

dump

dump

mininet> dump
<Host h7: h7-eth0:10.0.0.7 pid=6926>
<Host h3: h3-eth0:10.0.0.3 pid=6928>
<Host h4: h4-eth0:10.0.0.4 pid=6930>
<Host h5: h5-eth0:10.0.0.5 pid=6932>
<Host h1: h1-eth0:10.0.0.1 pid=6934>
<Host h8: h8-eth0:10.0.0.8 pid=6936>
<Host h6: h6-eth0:10.0.0.6 pid=6938>
<Host h2: h2-eth0:10.0.0.2 pid=6940>
<OVSSwitch s2: lo:127.0.0.1,s2-eth1:None,s2-eth2:None,s2-eth3:None,s2-eth4:None pid=6909>
<OVSSwitch s4: lo:127.0.0.1,s4-eth1:None,s4-eth2:None,s4-eth3:None pid=6912>
<OVSSwitch s1: lo:127.0.0.1,s1-eth1:None,s1-eth2:None,s1-eth3:None pid=6915>
<OVSSwitch s3: lo:127.0.0.1,s3-eth1:None,s3-eth2:None,s3-eth3:None pid=6918>
<OVSSwitch s5: lo:127.0.0.1,s5-eth1:None,s5-eth2:None,s5-eth3:None pid=6921>
<Controller c0: 127.0.0.1:6633 pid=6899>
mininet>

iperf bandwidth test

iperf h1 h2

*** Starting CLI:
mininet> iperf h1 h2
*** Iperf: testing TCP bandwidth between h1 and h2
*** Results: ['78.1 Mbits/sec', '79.7 Mbits/sec']
mininet> iperf h1 h2
*** Iperf: testing TCP bandwidth between h1 and h2
*** Results: ['81.0 Mbits/sec', '96.0 Mbits/sec']
mininet>
mininet> iperf h1 h2
*** Iperf: testing TCP bandwidth between h1 and h2
*** Results: ['74.6 Mbits/sec', '87.2 Mbits/sec']
mininet>
mininet> iperf h1 h2
*** Iperf: testing TCP bandwidth between h1 and h2
*** Results: ['78.1 Mbits/sec', '94.0 Mbits/sec']
mininet> iperfudp h1 h2
invalid number of args: iperfudp bw src dst
bw examples: 10M
mininet> iperfudp 100M h1 h2
*** Iperf: testing UDP bandwidth between h1 and h2
*** Results: ['100M', '23.7 Mbits/sec', '23.7 Mbits/sec']
mininet>

Debugging notes

  • Found that pingall had unreachable hosts

This took a very long time: I swapped kernels, reinstalled things, and searched for ages before realizing that every switch must be connected to the controller for all pings to succeed.

Code

from mininet.net import Mininet
from mininet.node import Controller, RemoteController, OVSController
from mininet.node import CPULimitedHost, Host, Node
from mininet.node import OVSKernelSwitch, UserSwitch
from mininet.node import IVSSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel, info
from mininet.link import TCLink, Intf
from subprocess import call

def myNetwork():

    net = Mininet( topo=None,
                   build=False,
                   ipBase='10.0.0.0/8')

    info( '*** Adding controller\n' )
    c0=net.addController(name='c0',
                      controller=Controller,
                      protocol='tcp',
                      port=6633)

    info( '*** Add switches\n')
    s4 = net.addSwitch('s4', cls=OVSKernelSwitch)
    s3 = net.addSwitch('s3', cls=OVSKernelSwitch)
    s1 = net.addSwitch('s1', cls=OVSKernelSwitch)
    s2 = net.addSwitch('s2', cls=OVSKernelSwitch)
    s5 = net.addSwitch('s5', cls=OVSKernelSwitch)

    info( '*** Add hosts\n')
    h3 = net.addHost('h3', cls=Host, ip='10.0.0.3/24', defaultRoute=None)
    h2 = net.addHost('h2', cls=Host, ip='10.0.0.2/24', defaultRoute=None)
    h4 = net.addHost('h4', cls=Host, ip='10.0.0.4/24', defaultRoute=None)
    h5 = net.addHost('h5', cls=Host, ip='10.0.0.5/24', defaultRoute=None)
    h1 = net.addHost('h1', cls=Host, ip='10.0.0.1/24', defaultRoute=None)
    h7 = net.addHost('h7', cls=Host, ip='10.0.0.7/24', defaultRoute=None)
    h8 = net.addHost('h8', cls=Host, ip='10.0.0.8/24', defaultRoute=None)
    h6 = net.addHost('h6', cls=Host, ip='10.0.0.6/24', defaultRoute=None)

    info( '*** Add links\n')
    s1h1 = {'bw':100,'delay':'5ms'}
    net.addLink(s1, h1, cls=TCLink , **s1h1)
    s1h2 = {'bw':100,'delay':'5ms'}
    net.addLink(s1, h2, cls=TCLink , **s1h2)
    s1s2 = {'bw':100,'delay':'5ms'}
    net.addLink(s1, s2, cls=TCLink , **s1s2)
    s2s3 = {'bw':100,'delay':'5ms'}
    net.addLink(s2, s3, cls=TCLink , **s2s3)
    s2s4 = {'bw':100,'delay':'5ms'}
    net.addLink(s2, s4, cls=TCLink , **s2s4)
    s2s5 = {'bw':100,'delay':'5ms'}
    net.addLink(s2, s5, cls=TCLink , **s2s5)
    s3h7 = {'bw':100,'delay':'5ms'}
    net.addLink(s3, h7, cls=TCLink , **s3h7)
    s3h8 = {'bw':100,'delay':'5ms'}
    net.addLink(s3, h8, cls=TCLink , **s3h8)
    s4h3 = {'bw':100,'delay':'5ms'}
    net.addLink(s4, h3, cls=TCLink , **s4h3)
    s4h4 = {'bw':100,'delay':'5ms'}
    net.addLink(s4, h4, cls=TCLink , **s4h4)
    s5h5 = {'bw':100,'delay':'5ms'}
    net.addLink(s5, h5, cls=TCLink , **s5h5)
    s5h6 = {'bw':100,'delay':'5ms'}
    net.addLink(s5, h6, cls=TCLink , **s5h6)

    info( '*** Starting network\n')
    net.build()
    info( '*** Starting controllers\n')
    for controller in net.controllers:
        controller.start()

    info( '*** Starting switches\n')
    net.get('s4').start([c0])
    net.get('s3').start([c0])
    net.get('s1').start([c0])
    net.get('s2').start([c0])
    net.get('s5').start([c0])

    info( '*** Post configure switches and hosts\n')

    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    myNetwork()
root@ubuntu:/home/sdn# python3 wk3.py
*** Adding controller
*** Add switches
*** Add hosts
*** Add links
(100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) *** Starting network
*** Configuring hosts
h3 h2 h4 h5 h1 h7 h8 h6
*** Starting controllers
*** Starting switches
(100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) (100.00Mbit 5ms delay) *** Post configure switches and hosts
*** Starting CLI:
mininet> pingall
*** Ping: testing ping reachability
h3 -> h2 h4 h5 h1 h7 h8 h6
h2 -> h3 h4 h5 h1 h7 h8 h6
h4 -> h3 h2 h5 h1 h7 h8 h6
h5 -> h3 h2 h4 h1 h7 h8 h6
h1 -> h3 h2 h4 h5 h7 h8 h6
h7 -> h3 h2 h4 h5 h1 h8 h6
h8 -> h3 h2 h4 h5 h1 h7 h6
h6 -> h3 h2 h4 h5 h1 h7 h8
*** Results: 0% dropped (56/56 received)
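Since all twelve links in the script share identical parameters, the repeated option dicts could be collapsed into a loop (a sketch; the endpoint names match the script above, and `add_links` is a helper of my own):

```python
# Shared TCLink options used by every link in the topology above
LINK_OPTS = {'bw': 100, 'delay': '5ms'}

# Endpoint pairs exactly as wired in the script
LINKS = [
    ('s1', 'h1'), ('s1', 'h2'), ('s1', 's2'),
    ('s2', 's3'), ('s2', 's4'), ('s2', 's5'),
    ('s3', 'h7'), ('s3', 'h8'),
    ('s4', 'h3'), ('s4', 'h4'),
    ('s5', 'h5'), ('s5', 'h6'),
]

def add_links(net, nodes, link_cls):
    """Replace the twelve addLink calls: `nodes` maps name -> Mininet node,
    `link_cls` would be mininet.link.TCLink in the real script."""
    for a, b in LINKS:
        net.addLink(nodes[a], nodes[b], cls=link_cls, **LINK_OPTS)

print(len(LINKS))  # 12
```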