Does the VMware e1000 virtual NIC work with the DPDK driver?

DPDK helloworld
DPDK helloworld environment setup: I don't have an Intel board at hand, so a virtual machine has to do for now. The VM cannot have more vCPUs than the physical machine, so I can't simulate a NUMA environment yet. DPDK needs at least two NICs: eth0 and eth1 are given to DPDK, and eth2 is used to communicate with the host.

Open the VM's configuration file, xxx.vmx, and set all NICs to e1000:

ethernet0.present = "TRUE"
ethernet0.connectionType = "hostonly"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet0.addressType = "static"
ethernet0.virtualDev = "e1000"
ethernet1.present = "TRUE"
ethernet1.connectionType = "hostonly"
ethernet1.wakeOnPcktRcv = "FALSE"
ethernet1.addressType = "static"
ethernet1.virtualDev = "e1000"
ethernet2.present = "TRUE"
ethernet2.connectionType = "nat"
ethernet2.wakeOnPcktRcv = "FALSE"
ethernet2.addressType = "static"
ethernet2.virtualDev = "e1000"

Download DPDK:

git clone git://dpdk.org/dpdk

Set the environment variables:

export RTE_SDK=/root/dpdk
export RTE_TARGET=i686-default-linuxapp-gcc
export EXTRA_CFLAGS="-O0 -g"

Since this is a 32-bit machine, the target is set to i686; see the intel-dpdk-getting-started-guide for more targets. EXTRA_CFLAGS turns off compiler optimization and adds debug information.

Build the code:

make config T=i686-default-linuxapp-gcc
make install T=i686-default-linuxapp-gcc
make -C i686-default-linuxapp-gcc
make -C examples/helloworld

Reserve some hugepages and mount the hugetlbfs:

echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

Load uio and igb_uio:

modprobe uio
insmod ./build/kmod/igb_uio.ko

Bind the devices:

./tools/igb_uio_bind.py --bind=igb_uio eth0
./tools/igb_uio_bind.py --bind=igb_uio eth1

If you need to unbind (the bus:slot.func differs from device to device) and rebind the NICs to the kernel e1000 driver:

./tools/igb_uio_bind.py --unbind 02:01.0
./tools/igb_uio_bind.py --unbind 02:02.0
./tools/igb_uio_bind.py --bind=e1000 02:01.0
./tools/igb_uio_bind.py --bind=e1000 02:02.0

Finally, run helloworld:

root@bogon:~/dpdk# ./examples/helloworld/build/helloworld -c 0xf -n 2
EAL: Cannot read numa node link for lcore 0 - using physical package id instead
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Cannot read numa node link for lcore 1 - using physical package id instead
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Cannot read numa node link for lcore 2 - using physical package id instead
EAL: Detected lcore 2 as core 0 on socket 1
EAL: Cannot read numa node link for lcore 3 - using physical package id instead
EAL: Detected lcore 3 as core 1 on socket 1
EAL: Skip lcore 4 (not detected)
...
EAL: Skip lcore 63 (not detected)
EAL: Setting up memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0xe800000 bytes
EAL: Virtual area found at 0xa8400000 (size = 0xe800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0xa8000000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0xa7a00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0xa7600000 (size = 0x200000)
EAL: Ask a virtual area of 0xc00000 bytes
EAL: Virtual area found at 0xa6800000 (size = 0xc00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0xa6400000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0xa6000000 (size = 0x200000)
EAL: Requesting 128 pages of size 2MB from socket 0
EAL: TSC frequency is ~2660068 KHz
EAL: Master core 0 is ready (tid=b7599800)
EAL: Core 1 is ready (tid=a5fffb40)
EAL: Core 2 is ready (tid=a57feb40)
EAL: Core 3 is ready (tid=a4ffdb40)
hello from core 1
hello from core 3
hello from core 2
hello from core 0

After that you can debug with gdb.
Installing and deploying Intel DPDK in a VMware virtual machine
Disclaimer: this document is for learning and exchange only; please do not use it for other commercial purposes.
Author: 朝阳_tony
Create Date:
23:38:47 Saturday
Last Change:
22:33:42 Wednesday
Please credit the source when reposting: http://blog.csdn.net/linzhaolover
Everyone is welcome to join the Intel DPDK QQ exchange group and learn from each other.
Please read this article alongside the Intel DPDK source code; it is based on the dpdk-1.5.1 sources, which can be downloaded from the DPDK website; see the official documentation for more details.
If you don't have an Intel NIC or a suitable Linux system and just want to get a simple feel for DPDK, you can deploy a basic DPDK environment in VMware.
1. Install and configure a VM suitable for running DPDK in VMware
1) VM configuration requirements
vcpu = 2      # at least two CPUs: DPDK pins threads to cores and cannot run normally on one; configure more if your machine allows
memory = 1024  # i.e. 1 GB; the more the better, since hugepages must be reserved out of it
OS: I installed RHEL 6.1. You can pick a newer release, but not an older one, which may not be supported. (RHEL 6.3/6.4/6.5 downloads are also available.)
Update the kernel after installing the system. My VM currently runs linux-3.3.2; a kernel between 3.0 and 3.8 is the safest choice, since people have run DPDK on kernels in that range.
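If you want to check programmatically whether a kernel release string falls in that suggested window, a trivial sketch follows; the helper name is mine, and the 3.0-3.8 range is the author's experience, not a hard DPDK requirement:

```python
def in_suggested_range(release):
    """True if a kernel release like '3.3.2' falls in the 3.0.x-3.8.x window."""
    base = release.split('-')[0]  # drop any '-generic' style suffix
    major, minor = (int(x) for x in base.split('.')[:2])
    return (3, 0) <= (major, minor) <= (3, 8)

print(in_suggested_range('3.3.2'))   # True: the kernel used in this VM
print(in_suggested_range('2.6.32'))  # False: too old
```

On a live system the release string comes from `uname -r` (or `platform.release()` in Python).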
NICs: give it two.
I won't go over installing a guest OS in VMware here; there are plenty of tutorials online.
2) Add a NIC that DPDK supports
Don't be stingy with virtual NICs: add at least two Intel NICs, since a single one causes errors.
DPDK comes from Intel and currently seems to support only Intel NICs. After the VM is installed, check what NICs it currently has with the lspci command:
lspci | grep Ethernet
02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:05.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
The default NIC VMware gives a virtual machine is an AMD one, so how do we correctly add an Intel NIC?
OK, first shut the VM down.
Add another NIC, but don't start the VM yet; we still need to edit the VM's configuration file.
Mine is at E:\Users\adm\Documents\Virtual Machines\Red Hat Enterprise Linux 5\Red Hat 6.vmx
You chose the VM's working directory when installing it; hover the mouse over the VM's name in VMware's left pane and the working directory is shown automatically.
Open the configuration file in a text editor and add one line:
ethernet2.virtualDev = "e1000"
Since this is my third NIC, counting from 0 it is exactly eth2. After adding it, the relevant lines look like:
ethernet2.virtualDev = "e1000"
ethernet2.present = "TRUE"
e1000 is one of Intel's gigabit NICs.
Now restart the VM and check the NICs again; an 82545EM NIC has appeared:
lspci | grep Ethernet
02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:05.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:06.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
2. Deploy DPDK
1) Download the source
After starting the VM, clone the latest code from the DPDK site:
git clone git://dpdk.org/dpdk
2) Set the environment variables
Enter the dpdk directory, put the variables into a file, and source it:
export RTE_SDK=`pwd`
#export RTE_TARGET=x86_64-default-linuxapp-gcc
export RTE_TARGET=i686-default-linuxapp-gcc
Since my VM is 32-bit, I chose i686 and commented out the x86_64 line.
I put the three lines above into a file named dpdkrc and enable them with source.
Note: whenever you log in from a new terminal and enter this directory, you must source this file again before DPDK programs will run correctly.
3) Run DPDK with its setup script
Run the script to test DPDK:
./tools/setup.sh
----------------------------------------------------------
Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-default-linuxapp-gcc
[2] i686-default-linuxapp-icc
[3] x86_64-default-linuxapp-gcc
[4] x86_64-default-linuxapp-icc
Choose 1. My system is 32-bit, so I pick 1 and build the 32-bit source with gcc; if your VM is 64-bit, choose 3.
----------------------------------------------------------
Step 2: Setup linuxapp environment
----------------------------------------------------------
[5] Insert IGB UIO module
[6] Insert KNI module
[7] Setup hugepage mappings for non-NUMA systems
[8] Setup hugepage mappings for NUMA systems
[9] Display current Ethernet device settings
[10] Bind Ethernet device to IGB UIO module
After the build is OK, choose 5 to install the igb_uio.ko driver; once built, it sits in the i686-default-linuxapp-gcc/kmod/ directory. Before installing igb_uio.ko, the script actually loads the uio module first. uio is a mechanism for implementing drivers in user space, and parts of DPDK are built on it; it's worth reading up on uio drivers if you're interested.
Next, set up hugepages (option 7 on a non-NUMA system):
Removing currently reserved hugepages
.echo_tmp: line 2: /sys/devices/system/node/node?/hugepages/hugepages-2048kB/nr_hugepages: No such file or directory
Unmounting /mnt/huge and removing directory
Input the number of 2MB pages
Example: to have 128MB of hugepages available, enter '64' to
reserve 64 * 2MB pages
Number of pages: 64
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs
The script complains that the nr_hugepages file doesn't exist; I ignored it for now and don't yet know the cause.
When asked how much memory to reserve, I entered 64: 64 × 2 MB = 128 MB, enough for a simple test.
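The arithmetic behind that prompt is simply pages × 2 MB per hugepage; a small helper sketch makes the conversion explicit (the function names are mine, not part of the DPDK scripts):

```python
HUGEPAGE_MB = 2  # default hugepage size on x86 Linux is 2 MB

def pages_for_mb(total_mb):
    """2 MB hugepages needed to reserve total_mb of memory, rounding up."""
    return -(-total_mb // HUGEPAGE_MB)

def mb_for_pages(pages):
    """Memory covered by a given number of 2 MB hugepages."""
    return pages * HUGEPAGE_MB

print(pages_for_mb(128))  # 64: entering '64' reserves the 128 MB used here
print(mb_for_pages(64))   # 128
```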
Look at your current devices (option 9):
Network devices using IGB_UIO driver
====================================
Network devices using kernel driver
===================================
0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio
Other network devices
=====================
I have three virtual NICs, and only the last one is Intel. The listing shows its current driver is e1000, not igb_uio; the next step is to bind it.
Bind the NIC (option 10):
Option: 10
Network devices using IGB_UIO driver
====================================
Network devices using kernel driver
===================================
0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio
Other network devices
=====================
Enter PCI address of device to bind to IGB UIO driver: 02:06.0
It asks for the PCI address; enter just the bus:slot.func part after the 0000: prefix, e.g. 02:06.0. Remember to type the punctuation too.
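The bus:slot.func shorthand is just the tail of the full PCI address, optionally preceded by the four-digit PCI domain (0000 here). A hypothetical normalizer, not part of the DPDK tools, shows the two accepted spellings:

```python
import re

# bus:slot.func with an optional domain prefix, e.g. "02:06.0" or "0000:02:06.0"
_BDF = re.compile(r'^(?:([0-9a-fA-F]{4}):)?'   # optional domain
                  r'([0-9a-fA-F]{2}):'         # bus
                  r'([0-9a-fA-F]{2})\.'        # slot (device)
                  r'([0-7])$')                 # function

def normalize_pci(addr):
    """Return the canonical domain:bus:slot.func form, defaulting the domain to 0000."""
    m = _BDF.match(addr.strip())
    if m is None:
        raise ValueError('not a PCI address: %r' % addr)
    domain = (m.group(1) or '0000').lower()
    return '%s:%s:%s.%s' % (domain, m.group(2).lower(), m.group(3).lower(), m.group(4))

print(normalize_pci('02:06.0'))       # 0000:02:06.0
print(normalize_pci('0000:02:07.0'))  # 0000:02:07.0
```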
Note that binding can fail with an error like the following:
Enter PCI address of device to bind to IGB UIO driver: 02:06.0 02:07.0
Routing table indicates that interface 0000:02:06.0 is active. Not modifying
"is active" means the corresponding NIC is probably up, so bring it down first.
For example, my NIC is eth2:
ifconfig eth2 down
After shutting it down, repeat the bind operation above.
Then check the NIC status again:
Network devices using IGB_UIO driver
====================================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000
Network devices using kernel driver
===================================
0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
Other network devices
=====================
See drv=igb_uio: the binding has taken effect, and the Intel NIC is now driven by igb_uio.
Now test a DPDK program (option 12 runs testpmd):
Option: 12
Enter hex bitmask of cores to execute testpmd app on
Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Since my VM has only 2 CPUs, the hexadecimal core mask is 0x3. Press Enter to run it.
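The core mask (the same value passed to the EAL -c option) is a hexadecimal bitmask with bit i set for every lcore i the app may use: two CPUs (cores 0 and 1) give binary 11, i.e. 0x3, and cores 0-7 give 0xff as in the script's example. A sketch of the conversion (the helper name is mine):

```python
def coremask(cores):
    """Build the hex core bitmask from an iterable of lcore ids."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return hex(mask)

print(coremask(range(2)))  # 0x3: the mask used for this 2-CPU VM
print(coremask(range(8)))  # 0xff: the script's own example
print(coremask([0, 2]))    # 0x5: non-contiguous cores work too
```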
Then type start to send some packets:
Interactive-mode selected
Configuring Port 0 (socket -1)
Checking link statuses...
Port 0 Link Up - speed 1000 Mbps - full-duplex
testpmd& start
Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained
io packet forwarding - CRC stripping disabled - packets/burst=16
nb forwarding cores=1 - nb forwarding ports=1
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=36 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0x0
There is a warning; what does it mean?
Update (22:32:10, Wednesday): the warning above was explained by frank, a fellow DPDK learner. It appears because I had added only one Intel NIC; add a second one and it's OK.
Type stop to stop forwarding:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0  ----------------------
RX-packets: 0
RX-dropped: 0
RX-total: 0
TX-packets: 0
TX-dropped: 0
TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0
RX-dropped: 0
RX-total: 0
TX-packets: 0
TX-dropped: 0
TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Why are there no packets? If anyone knows, please tell me.
Update (22:38:16, Wednesday): the no-traffic problem is solved, O(∩_∩)O~. The cause was that I had added the NICs in NAT mode; changing them to host-only mode fixed it. I still wonder, though: what is the important difference between these two modes?
---------------------- Forward statistics for port 0  ----------------------
RX-packets: 8890
RX-dropped: 0
RX-total: 8890
TX-packets: 8894
TX-dropped: 0
TX-total: 8894
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1  ----------------------
RX-packets: 8895
RX-dropped: 0
RX-total: 8895
TX-packets: 8889
TX-dropped: 0
TX-total: 8889
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 17785
RX-dropped: 0
RX-total: 17785
TX-packets: 17783
TX-dropped: 0
TX-total: 17783
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
(最多只允许输入30个字)intel DPDK虚拟机开发环境配置
时间: 14:10:49
&&&& 阅读:6970
&&&& 评论:
&&&& 收藏:0
标签:&&&&&&&&&&&&&&&&&&&&&&&&&&&DPDK介绍见:
1. Systems and software versions
OS: Ubuntu 12.04.3 LTS 64-bit and CentOS Linux release 7.0.1406 64-bit
dpdk: 1.7.0
dpdk 1.7.1 was also tried; on both systems every sample program fails with the error "EAL: Error reading from file descriptor", so 1.7.0 is used here.
2. VM configuration
VM software: VMware Workstation 10.0.1 build-1379776
CPU: 2 CPUs, 2 cores each
Memory: 1 GB or more
NICs: two Intel NICs for the DPDK experiments, plus another NIC for communicating with the host system
3. Configuration on Ubuntu 12.04
3.1 Install tools
You need gcc and a few other small utilities; they are all present by default, and anything missing can be installed with sudo apt-get install. Some of the dpdk scripts use python, so install that too.
3.2 Configure via the setup script
First run su to switch to root; if the root account isn't enabled, enable it with
sudo passwd root
dpdk provides a convenient configuration script, <dpdk>/tools/setup.sh, which makes setting up the environment easy.
1) Set the environment variables (this is the 64-bit Linux configuration):
export RTE_SDK=<dpdk>
export RTE_TARGET=x86_64-native-linuxapp-gcc
2) Run setup.sh; it displays the following:
------------------------------------------------------------------------------
RTE_SDK exported as /home/hack/dpdk-1.7.0
------------------------------------------------------------------------------
----------------------------------------------------------
Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-native-linuxapp-gcc
[2] i686-native-linuxapp-icc
[3] x86_64-ivshmem-linuxapp-gcc
[4] x86_64-ivshmem-linuxapp-icc
[5] x86_64-native-bsdapp-gcc
[6] x86_64-native-linuxapp-gcc
[7] x86_64-native-linuxapp-icc
----------------------------------------------------------
Step 2: Setup linuxapp environment
----------------------------------------------------------
[8] Insert IGB UIO module
[9] Insert VFIO module
[10] Insert KNI module
[11] Setup hugepage mappings for non-NUMA systems
[12] Setup hugepage mappings for NUMA systems
[13] Display current Ethernet device settings
[14] Bind Ethernet device to IGB UIO module
[15] Bind Ethernet device to VFIO module
[16] Setup VFIO permissions
----------------------------------------------------------
Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[17] Run test application ($RTE_TARGET/app/test)
[18] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)
----------------------------------------------------------
Step 4: Other tools
----------------------------------------------------------
[19] List hugepage info from /proc/meminfo
----------------------------------------------------------
Step 5: Uninstall and system cleanup
----------------------------------------------------------
[20] Uninstall all targets
[21] Unbind NICs from IGB UIO driver
[22] Remove IGB UIO module
[23] Remove VFIO module
[24] Remove KNI module
[25] Remove hugepage mappings
[26] Exit Script
Choose 6 to build.
3) Choose 8 to insert the igb_uio module.
4) Choose 11 to configure hugepages (non-NUMA). You'll then be prompted for the number of pages; 64 or 128 is fine.
Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory
Input the number of 2MB pages
Example: to have 128MB of hugepages available, enter '64' to
reserve 64 * 2MB pages
Number of pages: 128
Choose 19 to confirm the hugepage configuration:
AnonHugePages:
HugePages_Total:
HugePages_Free:
HugePages_Rsvd:
HugePages_Surp:
Hugepagesize:
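Option 19 is just a grep of /proc/meminfo, and the same check can be scripted. A small parser sketch, exercised here on a sample string in the file's format (the field names are the real kernel ones; the surrounding code is mine):

```python
def hugepage_info(meminfo_text):
    """Extract the HugePages_* counters and Hugepagesize (kB) from /proc/meminfo text."""
    info = {}
    for line in meminfo_text.splitlines():
        if line.startswith(('HugePages_', 'Hugepagesize')):
            key, _, rest = line.partition(':')
            info[key] = int(rest.split()[0])  # ignore the trailing 'kB' unit
    return info

sample = """\
AnonHugePages:      2048 kB
HugePages_Total:     128
HugePages_Free:      128
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
"""
info = hugepage_info(sample)
print(info['HugePages_Total'])                                 # 128
print(info['HugePages_Total'] * info['Hugepagesize'] // 1024)  # 256 (MB reserved)
```

In a real session you would pass open('/proc/meminfo').read() instead of the sample.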
5) Choose 14 to bind the NICs that dpdk will use:
Network devices using DPDK-compatible driver
============================================
Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=igb_uio *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth1 drv=e1000 unused=igb_uio
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio
Other network devices
=====================
Enter PCI address of device to bind to IGB UIO driver: .0
After binding, choose 13 to view the current NIC configuration:
Network devices using DPDK-compatible driver
============================================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000
Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=igb_uio *Active*
Other network devices
=====================
6) Choose 18 to run the testpmd test program.
Note: to run this test program, the VM should give dpdk 2 NICs.
Enter hex bitmask of cores to execute testpmd app on
Example: to execute app on cores 0 to 7, enter 0xff
bitmask: f
If everything is fine, pressing Enter produces output like this:
Launching app
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 1 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0xf000000 bytes
EAL: Virtual area found at 0x7fe (size = 0xf000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fe827c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fe (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fe826e00000 (size = 0x800000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fe (size = 0x400000)
EAL: Requesting 128 pages of size 2MB from socket 0
EAL: TSC frequency is ~3292453 KHz
EAL: Master core 0 is ready (tid=37c79800)
EAL: Core 3 is ready (tid=24ffc700)
EAL: Core 2 is ready (tid=257fd700)
EAL: Core 1 is ready (tid=25ffe700)
EAL: PCI device 0000:02:01.0 on NUMA socket -1
probe driver: 8086:100f rte_em_pmd
0000:02:01.0 not managed by UIO driver, skipping
EAL: PCI device 0000:02:06.0 on NUMA socket -1
probe driver: 8086:100f rte_em_pmd
PCI memory mapped at 0x7fe837c23000
PCI memory mapped at 0x7fe837c13000
EAL: PCI device 0000:02:07.0 on NUMA socket -1
probe driver: 8086:100f rte_em_pmd
PCI memory mapped at 0x7fe837bf3000
PCI memory mapped at 0x7fe837be3000
Interactive-mode selected
Configuring Port 0 (socket 0)
Port 0: 00:0C:29:14:50:CE
Configuring Port 1 (socket 0)
Port 1: 00:0C:29:14:50:D8
Checking link statuses...
Port 0 Link Up - speed 1000 Mbps - full-duplex
Port 1 Link Up - speed 1000 Mbps - full-duplex
Type start to begin packet forwarding:
testpmd& start
io packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0x0
Type stop to stop forwarding; the statistics are then displayed:
testpmd& stop
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0  ----------------------
RX-packets: 5544832
RX-dropped: 0
RX-total: 5544832
TX-packets: 5544832
TX-dropped: 0
TX-total: 5544832
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1  ----------------------
RX-packets: 5544832
RX-dropped: 0
RX-total: 5544832
TX-packets: 5544832
TX-dropped: 0
TX-total: 5544832
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 11089664
RX-dropped: 0
TX-packets: 11089664
TX-dropped: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3.3 Configure via individual commands
It's best to switch to root first.
1) Build dpdk
Enter the dpdk top-level directory <dpdk> and run:
make install T=x86_64-native-linuxapp-gcc
2) Configure hugepages (non-NUMA)
echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
The hugepage state can be checked with:
cat /proc/meminfo | grep Huge
3) Install the igb_uio driver
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
4) Bind the NICs
First look at the current NIC status:
./tools/dpdk_nic_bind.py --status
Network devices using DPDK-compatible driver
============================================
Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=igb_uio *Active*
Other network devices
=====================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' unused=e1000,igb_uio
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' unused=e1000,igb_uio
Bind them:
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:06.0
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:07.0
If a NIC has an interface name such as eth1 or eth2, you can also pass the name after -b igb_uio instead of the PCI address.
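Under the hood, mapping an interface name to its PCI address uses the /sys/class/net/&lt;if&gt;/device symlink, which points at the device's directory under /sys/bus/pci/devices. A hypothetical stand-in for that lookup (the sysfs root is a parameter only so the function can be exercised on a fake tree; the real dpdk_nic_bind.py implementation differs):

```python
import os

def pci_addr_of(ifname, sysfs_root='/sys'):
    """Return the PCI address behind an interface name, e.g. 'eth1' -> '0000:02:06.0'."""
    dev_link = os.path.join(sysfs_root, 'class', 'net', ifname, 'device')
    # the symlink target's basename is the full domain:bus:slot.func address
    return os.path.basename(os.path.realpath(dev_link))
```

On the VM above, pci_addr_of('eth1') would return the address to hand to -b igb_uio.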
5) Run the testpmd test program
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 2 -- -i
6) Build and run the other sample programs
<dpdk>/examples contains many sample programs that are not built along with dpdk itself. Taking helloworld as an example, first set the environment variables:
export RTE_SDK=<dpdk>
export RTE_TARGET=x86_64-native-linuxapp-gcc
Then enter <dpdk>/examples/helloworld and run make; on success a build directory is generated, containing the compiled helloworld program.
4. Configuration on CentOS 7.0
If you chose the minimal install when installing the CentOS VM, you also need to install the basic development tools group (gcc, python, and so on).
In addition, the dpdk_nic_bind.py script provided by dpdk calls the lspci command, which isn't installed by default; without it NICs can't be bound. Install it with:
yum install pciutils
ifconfig isn't installed by default either; if you want it, run:
yum install net-tools
On CentOS, a NIC to be handed to dpdk may be active before binding; it must be disabled first or binding fails. One way to disable it:
ifconfig eno down
Here eno is the interface name, just like eth0.
Using setup.sh, and building and configuring dpdk via commands, works the same on CentOS as on Ubuntu, so it is omitted here.
Source: http://www.cnblogs.com/zzqcn/p/4024205.html