IETF doc seeks reliable vSwitch benchmark

Once switches become just another function to spawn, you'll need to know how they'll fare

If you fancy wrapping your mind around the complexities that make virtual switches (vSwitches) hard to benchmark, an IETF informational RFC is worth a read.

Put together by Maryam Tahhan and Billy O'Mahony of Intel (note the usual disclaimer that RFCs are the work of individuals, not their employers) and Al Morton of AT&T, RFC 8204 is designed to help test labs get repeatable, reliable vSwitch benchmarks.

It's part of the ongoing work of the OPNFV VSperf (vSwitch Performance) project, which wants to avoid the kind of misunderstanding that happens when people set up tests that suit themselves.

The world of the vSwitch needs benchmarking, if for no other reason than so data centre designers can work out how much punch servers will need to match a physical switch. That's going to matter if switches become just another virtual service to be spun up as needed.

As the RFC notes: “A key configuration aspect for vSwitches is the number of parallel CPU cores required to achieve comparable performance with a given physical device or whether some limit of scale will be reached before the vSwitch can achieve the comparable performance level.”

And that's a knotty problem, because a vSwitch has available to it all of the knobs, buttons, levers and tweaks a sysadmin can apply to the server that hosts it.

Moreover, as they note, benchmarks have to be repeatable. Since the switches will likely run as VMs on commodity servers, there's a bunch of configuration parameters tests should capture that nobody bothers with when they're testing how fast a 40 Gbps Ethernet switch can pass packets port-to-port.

The current kitchen-sink list includes (for hardware) BIOS data, power management, CPU microcode level, the number of cores enabled and how many of those were used in a test, memory type and size, DIMM configurations, various network interface card details, and PCI configuration.

There's an even-longer list of software details: think “everything from the bootloader up”.
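To give a feel for what capturing all that looks like in practice, here's a minimal sketch (not from the RFC or the VSperf tooling) that snapshots a few of those hardware and software details alongside a test run. It assumes a Linux SUT with lscpu, dmidecode and ethtool installed (dmidecode needs root), and "eth0" is just a stand-in for whichever NIC is under test; the field list is illustrative, not the RFC's checklist.

```python
# Sketch: snapshot SUT configuration details alongside a benchmark run.
# Assumes a Linux SUT; the chosen fields and the "eth0" interface name
# are illustrative assumptions, not taken from RFC 8204.
import json
import subprocess


def run(cmd):
    """Run a command and return its output, or a note if it fails."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout.strip()
    except (OSError, subprocess.CalledProcessError) as exc:
        return f"unavailable: {exc}"


snapshot = {
    "kernel": run(["uname", "-a"]),                # software stack, "from the bootloader up"
    "cpu": run(["lscpu"]),                         # cores, sockets, model and stepping
    "bios": run(["dmidecode", "-t", "bios"]),      # BIOS vendor, version, date
    "memory": run(["dmidecode", "-t", "memory"]),  # DIMM type, size and population
    "nic_driver": run(["ethtool", "-i", "eth0"]),  # driver and firmware of the test NIC (name assumed)
}

with open("sut_snapshot.json", "w") as fh:
    json.dump(snapshot, fh, indent=2)
```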

Not to mention that in the virtualized world, we're supposed to assume that a host runs lots of processes – even if that would take a benchmark standard down one rabbit hole too many.

“It's unlikely that the virtual switch will be the only application running on the system under test (SUT)”, the RFC notes, “so CPU utilization, cache utilization, and memory footprint should also be recorded for the virtual implementations of internetworking functions.

“However, internally measured metrics such as these are not benchmarks; they may be useful for the audience … to know and may also be useful if there is a problem encountered during testing.”
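For what recording those internal metrics might look like, here's a minimal sketch that samples the CPU utilisation and memory footprint of the vSwitch process during a test. It leans on the third-party psutil library as an assumed dependency, and "ovs-vswitchd" is just a stand-in for whichever vSwitch is under test; none of this comes from the RFC itself.

```python
# Sketch: sample CPU utilisation and memory footprint of the vSwitch
# process while a test runs. psutil is an assumed dependency and
# "ovs-vswitchd" is a placeholder process name.
import time
import psutil


def sample_vswitch(proc_name="ovs-vswitchd", interval=1.0, samples=60):
    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == proc_name)
    records = []
    proc.cpu_percent(None)  # prime the counter; the first reading is meaningless
    for _ in range(samples):
        time.sleep(interval)
        records.append({
            "cpu_percent": proc.cpu_percent(None),  # % of one core since the last call
            "rss_bytes": proc.memory_info().rss,    # resident memory footprint
        })
    return records
```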

VSperf is documented here.

