
[Humbledown highlights] VLAN tag stripping in Virtualbox (actually, Intel NICs et al.)

This is historical material from my old site, but as I have just bumped into a page that linked to it, I thought I would republish it. 
I have not verified that this material is still accurate.
Feel free to post an update as a comment and I'll publish it.


Mon Jan 17 21:41:46 NZDT 2011
Short version: When using VirtualBox “Internal Network” adaptors in VLAN environments, don’t use the “Intel PRO/1000” family of adaptors, which are the default for some operating system types. Instead, use either the Paravirtualised adaptor (which requires your guest to have virtio drivers) or the “AMD PCNet” family of adaptors.
This is because the “Intel PRO/1000” family of adaptors strips the VLAN tags. This is not a problem specific to VirtualBox: it also occurs in other virtualisation products and on native systems, although you hear about it more often on virtualisation hosts because it is more common to want to expose such a host to a VLAN trunk. There are some Windows registry settings for changing this in Intel’s Windows drivers, but currently there are no published mechanisms for Linux systems (using the e1000 kernel module).
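Switching the emulated adaptor can also be done from the command line rather than the GUI. A sketch (the VM name “r1” and the adapter number 1 are placeholders, not from my lab; check VBoxManage modifyvm --help on your VirtualBox version for the exact values it accepts):

```shell
# Select the AMD PCNet FAST III adaptor for NIC 1 of the VM "r1"
# (VM name and NIC number are illustrative).
VBoxManage modifyvm "r1" --nictype1 Am79C973

# Or, if the guest has virtio drivers, use the paravirtualised adaptor:
VBoxManage modifyvm "r1" --nictype1 virtio
```

The VM must be powered off for modifyvm to take effect.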
In preparing materials for this year’s TELE301, I found that a recent upgrade of VirtualBox had broken my VLAN lab environment. This post shows the problem, how it was diagnosed, and its solution.


I had configured the physical topology shown in the diagram below, but although C1 could ping R1, R1 could not ping R2 (and other similar interactions among the VLANs hosted on Switch 2 also failed).
Tcpdump on R2’s eth1 (note: not eth1.10 or similar) was showing the frames received as being untagged. ARP requests would time out. Thus, either the frames were not being tagged by R1, stripped in transit to R2, stripped by R2, or simply not shown by tcpdump.
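To make the diagnosis concrete, here is roughly what I mean by watching for tags with tcpdump (interface names as in the lab; the -e flag prints link-level headers, which is where the 802.1Q tag shows up):

```shell
# On R2's trunk-facing interface, print link-level headers; tagged
# frames appear with "vlan 10" (etc.) in the output.
tcpdump -e -n -i eth1

# Capture only 802.1Q-tagged frames; if this stays silent while pings
# are running, the tags are being stripped before they reach R2.
tcpdump -e -n -i eth1 vlan
```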


In previous versions of VirtualBox, the “AMD PCNet FAST III” was the default adaptor, because it was widely supported. Later versions of VirtualBox added support for a few adaptors in the “Intel PRO/1000” family, for compatibility reasons (see VirtualBox User Manual Section 6: “Virtual Networking”).
Using the “AMD PCNet FAST III” works as expected; the ARP is replied to, the ping -n works, and I see vlan 10 in the tcpdump output on R2’s eth1 interface.
On guest systems with virtio support, the Paravirtualised (virtio) driver ought to be used instead; it also works fine and should deliver better performance than the emulated PCNet adaptor.
Note: One thing to be aware of: if you attach a packet trace to an adaptor (ie. VBoxManage modifyvm name --nictraceN on --nictracefileN adaptor1.pcap) then you will not see the VLAN tags in the trace file, although the tag is seen when the frame gets to the destination machine on the internal network… though I do need to double-check that this is still the case when using the paravirtualised adaptor.
In Vyatta, you can verify that you are using a pcnet-driven card using the command show interfaces ethernet ethX physical: the "driver" line should show "pcnet32" in the case of the AMD PCNet adaptor. For the paravirtualised driver you will get output that looks like an error, as there is no “physical” adaptor being emulated.
Note that if you still want to use VLANs in a network with Intel adaptors, that is fine, so long as you don’t expose a trunk port (one where frames are tagged with a VLAN identifier) to the adaptor. If you want to tweak registry settings on Windows, you can do that, but at this time there appears to be no such control surface for Linux hosts.
[Update: 18 Jan 2011] Sasquatch, over on the VirtualBox user forums, points out:
The reason the VLAN tag is stripped is because the Intel adapters support VLAN tagging and needs to be set in the adapter properties. When you don’t provide such tag, the default is used (untagged). If you don’t have the VLAN tab in the adapter properties, you haven’t installed the advanced features of it. Grab the driver from the Intel website and install the full package, that will provide VLAN tagging. Using the .inf will not provide the VLAN tab or other features.
I tried this briefly today, in a Windows 7 guest in VirtualBox, but unfortunately Intel’s drivers detect that there is no real Intel adaptor installed in the system. I’ve tried two of the three available adaptors in VirtualBox, although none of them are really modern from an Intel perspective, so perhaps an older version of Windows might work better. I wonder if the installation program takes any switches…

Other notes and tidbits

  • On Vyatta, show interfaces ethernet eth1 physical reports that the ‘Intel PRO/1000 MT Server (82545EM)’ series is being driven by the ‘e1000’ driver. show version reports the Vyatta version as VC6.1-2010.10.16, with kernel version 2.6.32-1-586-vyatta-virt.
  • sudo ethtool --driver eth1 shows driver version is 7.3.21-k5-NAPI
  • There are two separate issues worth mentioning with respect to the VLAN tag stripping behaviour on Linux. The first is stripping of VLAN tags on incoming frames when using traffic-capture tools, although this should not be a problem under Linux, as the driver automatically disables VLAN tag stripping (in hardware) when the device enters promiscuous mode.
    The other issue, discussed in this document, is the tags being stripped on outgoing packets. The reason is much harder to fathom; perhaps it is due to a feature of the card called “Native VLAN”, which simply means that frames on an adaptor’s home VLAN are not tagged.
  • This issue is not particular to Vyatta, or even to Linux, but rather seems to be a common issue, particularly with Intel cards but also with others, such as the Marvell Yukon. Other possible interesting points of reference:
    VLAN testing with Cisco Catalyst 4006 not going so well
    (turns out a registry tweak was needed to turn off 'VLAN filtering')
    Network Connectivity — My sniffer is not seeing VLAN, 802.1q, or QoS tagged frames
    (Knowledge-base article from Intel; another registry tweak for “monitor mode”)
    README.txt from Intel for the e1000 driver module
    (note that it mentions that "Native VLANs" are supported in this version... that seems like a feature likely to automatically strip VLAN tags...)
    GNS3 — Turning off VLAN tag stripping on Marvell Yukon NIC cards
    This is not a problem restricted to Intel PRO/1000 cards. For example, Marvell Yukon cards can have a similar problem; the solution (another registry tweak) for that card type is discussed for Windows.
  • When e1000 is used, only outgoing frames are not tagged; tcpdump still shows tagged frames coming in from the network.
  • Even if configured using just vconfig add eth1 40 and ifconfig (ie. no Vyatta tools used to configure it), it still fails to tag egress packets.
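    For reference, the manual configuration I mean looks roughly like this (VLAN id 40 is from the test above; the addressing is illustrative, and on current systems the iproute2 ip link commands replace the older vconfig tool):

```shell
# Older style, as used in the test above:
vconfig add eth1 40
ifconfig eth1.40 192.168.40.1 netmask 255.255.255.0 up

# Equivalent with iproute2 on newer systems:
ip link add link eth1 name eth1.40 type vlan id 40
ip addr add 192.168.40.1/24 dev eth1.40
ip link set eth1.40 up
```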
  • I tried removing the module and reinserting it with debugging enabled.
    /sbin/modinfo e1000
    sudo modprobe -r e1000
    sudo modprobe e1000 debug=8
    But it didn’t appear to show much at debug level 8 anyway; I didn’t turn it up to maximum debugging (debug level 16).
  • ethtool -d eth1 dumps the registers of the adaptor, and shows a number of sections. Under the ‘CTRL (Device control register)’ section, ‘VLAN mode’ is enabled. Under the ‘RCTL (Receive control register)’ section, the ‘VLAN filter’ is disabled. There appears to be no method (besides perhaps writing a kernel module) available for manually tweaking the registers: for debugging it might be useful to disable ‘VLAN mode’, which the driver automatically enables when a VLAN is added. Without looking at the driver source, I can’t be certain of everything that ‘VLAN mode’ implies, and what could break if it is turned off.
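One avenue worth trying on later kernels (I have not verified it on this Vyatta release, and whether the e1000 driver honours these toggles depends on the kernel and ethtool versions) is turning off hardware VLAN offloading with ethtool rather than poking registers directly:

```shell
# Show current offload settings; look for the rx-vlan-offload and
# tx-vlan-offload lines.
ethtool -k eth1

# Attempt to disable VLAN tag insertion/stripping in hardware.
# (rxvlan/txvlan are the ethtool -K short names for these offloads.)
ethtool -K eth1 rxvlan off txvlan off
```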

