Layer3 connectivity problem on Hyper-V

nickmaleao
Posts: 3
Joined: Tue Mar 21, 2017 10:26 pm

Layer3 connectivity problem on Hyper-V

Post by nickmaleao » Wed Mar 22, 2017 2:49 pm

Qemu version: 2.4.0
Current API version: 2.0.3-53
Kernel: 4.4.14-eve-ng-ukms+

EVE-NG image running on Hyper-V (Windows Server 2012)


In January this year I started to get a problem with UNL where, when using IOU L2 devices/images, I couldn't get L3 connectivity with the hosts (Docker/Linux/VPCS), but everything related to L2 (CDP, STP, ARP) worked. That was when I decided to try EVE. After installing EVE, I replicated the same lab that I had in UNL with the same running configs, and everything worked.

Yesterday I went to the lab to do some tests, but before starting I decided to do an update. The update went well, I rebooted, and the VM started OK, but when I began the tests I started to get the same L3 connectivity problems that I had with UNL.

I did a simple test: created a new lab, added a network (cloud1), and attached 2 VPCS nodes to that bridge. Both ports come up on pnet1 in forwarding state:

48: vunl0_4_0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 master pnet1 state forwarding priority 32 cost 100
49: vunl0_3_0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 master pnet1 state forwarding priority 32 cost 100

vunl0_3_0 - eth0 vpcs1 - 10.0.20.10/24 00:50:79:66:68:03
vunl0_4_0 - eth0 vpcs2 - 10.0.20.20/24 00:50:79:66:68:04
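
For reference, the port list above can be checked with the standard iproute2 tools (a sketch; the interface names are the ones from this lab):

# list the ports attached to pnet1 and their STP state
bridge link show | grep pnet1

# show every interface enslaved to the pnet1 bridge
ip link show master pnet1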

When I try to ping from vpcs1 to vpcs2, the ARP request/response is visible to both VPCS nodes. If I run tcpdump on vunl0_3_0 and pnet1 I see the ICMP request, but on vunl0_4_0 I don't see the ICMP request, only the ARP request and response.
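
The captures described above can be reproduced with something like this (a sketch, one capture per interface, names as in this lab):

# on the bridge itself: ARP and the ICMP echo requests are both visible
tcpdump -eni pnet1 'arp or icmp'

# on the source port (vpcs1): the ICMP echo requests are seen leaving
tcpdump -eni vunl0_3_0 'arp or icmp'

# on the destination port (vpcs2): only ARP shows up, the ICMP never arrives
tcpdump -eni vunl0_4_0 'arp or icmp'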

Iptables and ebtables are clear and the default policy is ACCEPT. The forwarding database of pnet1 shows the VPCS MAC addresses learned on the correct ports:

33:33:00:00:00:01 dev pnet1 self permanent
01:00:5e:00:00:01 dev pnet1 self permanent
33:33:ff:66:59:ad dev pnet1 self permanent
0a:51:f6:19:5f:a1 dev vunl0_3_0 master pnet1 permanent
0a:51:f6:19:5f:a1 dev vunl0_3_0 vlan 1 master pnet1 permanent
00:50:79:66:68:03 dev vunl0_3_0 master pnet1
82:e7:fb:03:b5:0c dev vunl0_4_0 master pnet1 permanent
82:e7:fb:03:b5:0c dev vunl0_4_0 vlan 1 master pnet1 permanent
00:50:79:66:68:04 dev vunl0_4_0 master pnet1
22:f9:2e:52:c0:1f dev vunl0_5_32 master pnet1 permanent
aa:bb:cc:80:50:00 dev vunl0_5_32 master pnet1
22:f9:2e:52:c0:1f dev vunl0_5_32 vlan 1 master pnet1 permanent
aa:bb:cc:00:50:20 dev vunl0_5_32 master pnet1
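
For completeness, this is roughly how that state can be gathered and the firewall tables double-checked (a sketch; the exact rule sets will differ per install):

# MAC forwarding database of the bridge (the table shown above)
bridge fdb show | grep pnet1

# list iptables rules, counters and default chain policies
iptables -L -n -v

# list ebtables rules and policies
ebtables -L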


It seems that there is a problem with the binding of the device interfaces to the Linux bridges.

nickmaleao
Posts: 3
Joined: Tue Mar 21, 2017 10:26 pm

Re: Layer3 connectivity problem

Post by nickmaleao » Wed Mar 22, 2017 11:27 pm

FINALLY! After hours of tests and digging up info, I found the solution.

I needed to change the kernel setting net.bridge.bridge-nf-call-iptables to 0.
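
For anyone hitting the same thing, it is a regular sysctl (a sketch; on newer kernels this knob only exists once the br_netfilter module is loaded):

# stop bridged traffic from being passed to the iptables chains (runtime change)
sysctl -w net.bridge.bridge-nf-call-iptables=0

# make the change survive a reboot
echo 'net.bridge.bridge-nf-call-iptables = 0' >> /etc/sysctl.conf
sysctl -p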

Found the solution in this book: https://books.google.pt/books?id=L6PxBQ ... -PT&num=19

Another source: http://wiki.libvirt.org/page/Net.bridge ... ysctl.conf

nickmaleao
Posts: 3
Joined: Tue Mar 21, 2017 10:26 pm

Re: Layer3 connectivity problem

Post by nickmaleao » Wed Mar 22, 2017 11:34 pm

Well, after some reading I noticed that the option mentioned earlier is what passes bridged traffic to iptables. After a closer look at the default policies of the chains, I noticed that the default policy of the FORWARD chain was DROP. I changed it to ACCEPT, and now I am able to pass traffic on the bridge even with net.bridge.bridge-nf-call-iptables enabled.
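
A sketch of checking and fixing the policy (iptables policies are not persistent by themselves, so the change has to be saved or reapplied after a reboot):

# print the FORWARD chain policy (here it showed -P FORWARD DROP)
iptables -S FORWARD | head -n 1

# set the default policy back to ACCEPT
iptables -P FORWARD ACCEPT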

ramindia
Posts: 409
Joined: Sun Mar 19, 2017 10:27 pm

Re: Layer3 connectivity problem

Post by ramindia » Fri Mar 24, 2017 6:53 pm

nickmaleao wrote:
Wed Mar 22, 2017 11:34 pm
Well, after some reading I noticed that the option mentioned earlier is what passes bridged traffic to iptables. After a closer look at the default policies of the chains, I noticed that the default policy of the FORWARD chain was DROP. I changed it to ACCEPT, and now I am able to pass traffic on the bridge even with net.bridge.bridge-nf-call-iptables enabled.
If you do not need a firewall in Linux, you can disable iptables by stopping the service.
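
A sketch of that, assuming no host firewall is needed at all on the EVE-NG VM (the service name depends on the distribution, so this simply flushes the rules and opens the default policies):

# remove all rules and user-defined chains
iptables -F
iptables -X

# set every built-in chain policy to ACCEPT
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT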

R!
