TFDx @ DTW ’19 – Get To Know: Big Switch

In the final post of this series ahead of TFDx @ Dell Technologies World 2019, we will be focusing on Big Switch Networks, their evolving relationship with Dell EMC and their presence here at the show.

I’d like to start out by acknowledging that partnerships are a dime a dozen, and many vendors tentatively put their “support” behind things just to check a box and say they have a capability. In addition, I have noticed a not-uncommon discrepancy between the messaging in vendor marketing materials and the messaging (or general enthusiasm) of their SEs. As a partner peddling vendor wares, this type of scenario is less than inspiring.

Fortunately, that does not appear to be the case with Dell EMC and their embrace of Open Networking. In discussions with multiple levels and types of Dell EMC partner SEs, Open Networking is consistently mentioned as something that gives them an edge over other vendors, and it appears to be a point of pride. They are all about it.

Within this context, the recent news of the agreement between Dell EMC and Big Switch to OEM Big Switch products under the Dell EMC name makes a lot of sense. Dell EMC will provide the merchant-silicon based switching, Big Switch will provide the software, and the customer will get an open, mutually-validated and supported solution.

The primary components within this solution are Dell EMC S-Series Open Networking switches and Big Switch Big Cloud Fabric (BCF) software, so let’s talk a bit about those next.

Dell EMC S-Series Open Networking Switches

For the sake of brevity, I am going to focus on the switch type most relevant to the datacenter: the newly released line of 25GbE-and-up switches. According to Dell EMC contacts, the per-port price is very competitive compared to the 10GbE variants, and adoption of 25GbE (and above) looks to be accelerating.

Within this lineup, there are a number of port densities and uplink configurations available, including the following:

  • S5048F-ON & S5148F-ON: 48x25GbE and 6x100GbE (72x25GbE total with breakout)
  • S5212F-ON: 12x25GbE and 3x100GbE
  • S5224F-ON: 24x25GbE and 4x100GbE
  • S5248F-ON: 48x25GbE and 6x100GbE
  • S5296F-ON: 96x25GbE and 8x100GbE
  • S5232F-ON: 32x100GbE
  • S6010-ON: 32x40GbE or 96x10GbE and 8x40GbE
  • S6100-ON: 32x100GbE, 32x50GbE, 32x40GbE, 128x25GbE or 128x10GbE (breakout)

Obviously, it’s always impressive to see the specifications associated with the top model in a product line. With up to 128 ports available via breakout, the S6100-ON is no exception.

What stands out to me, though, is the inclusion of a very interesting half-width 12-port model. With a pair of these, a customer can power a new all-flash HCI (or other scale-out) environment of up to 12 nodes and occupy only 1U of rack space for networking, all while retaining network switch redundancy.

With compute and storage densities where they are in 2019, you can house a reasonably-sized environment with 12 HCI nodes. It can also be useful to keep HCI-specific east/west traffic off of the existing switching infrastructure, depending on the customer environment.

Not all customers in need of new compute and storage are ready to bite the bullet on a network refresh or re-architecture, either. This gives solution providers a good tool in the toolbelt for these occasions, and other networking vendors should take note.

The star of the show is…a 12-port switch? In a way, yes.

Common within the Dell EMC S-Series of Open Networking switches is the inclusion of the Open Network Install Environment (ONIE), which enables streamlined deployment of alternative OSes, including Big Switch Networks’ BCF. Dell’s own OS10 network OS is also available for deployment, should the customer prefer to go that direction.
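To make that deployment model a little more concrete, here is a small Python sketch of the installer-name search that ONIE-style image discovery walks through, most specific name first. The filename pattern and the example platform string are approximations for illustration, not copied verbatim from the ONIE or Dell EMC documentation:

```python
def onie_installer_candidates(arch: str, machine: str, rev: str) -> list[str]:
    """Approximate the default installer filenames an ONIE switch probes
    for on its discovery server, ordered from most to least specific."""
    return [
        f"onie-installer-{arch}-{machine}-{rev}",  # exact platform + revision
        f"onie-installer-{arch}-{machine}",        # platform, any revision
        f"onie-installer-{arch}",                  # any switch of this arch
        "onie-installer",                          # generic fallback
    ]

# Hypothetical platform string for a Dell EMC S5248F-ON-class switch:
for name in onie_installer_candidates("x86_64", "dellemc_s5248f_c3538", "r0"):
    print(name)
```

In practice, this fallback ordering is what lets one HTTP/TFTP server serve a fleet of mixed switch models: drop a generic image at the fallback name, and override per-platform only where needed.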

Underpinning all of this is merchant silicon, so customers need not worry as much about gaps in hardware capability, vendor expertise or R&D here. This approach allows specialist vendors like Broadcom and Barefoot to focus on what they do best (chip engineering), while Dell EMC and software vendors like Big Switch focus on getting the most from the capabilities provided. Hardware parity also brings costs down and encourages innovation through software, which benefits everyone.

Although a full analysis of Dell’s use of merchant ASICs in their networking gear is outside the scope of this post (and my wheelhouse), I’d recommend checking out this analysis on NextPlatform for more info. I think it’s safe to say the arguments against “whitebox” and for proprietary solutions are beginning to lose their potency, though.

An Open Networking switch equipped with ONIE doesn’t move frames by itself, though. For that, you’ll need an OS like Big Switch BCF, which we’ll touch on next.

Big Switch Networks Big Cloud Fabric

Big Switch Networks Big Cloud Fabric is available in two variants: Public Cloud (BCF-PC) and Enterprise Cloud (BCF-EC). Since we are focusing on the deployment of Big Switch as part of a Dell EMC Open Networking solution, we’ll keep things limited to BCF-EC, for now.

At its foundation, BCF is a controller-based design that moves the control plane off of the switches themselves and onto an intelligent central component (controller). This controller is typically implemented as a highly-available pair of appliances to ensure control services are resistant to failure.

As network changes are needed throughout the environment, these are made in automated fashion through API calls between the controller and subordinate switches. These switches are powered by a combination of merchant silicon and the Switch Light OS and are available from a number of vendors, including Dell EMC.
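As a rough illustration of that intent-driven workflow, the sketch below builds the kind of request payload an automation tool might construct before handing it to a fabric controller, which then programs every subordinate switch itself. To be clear, the tenant/segment schema and field names here are hypothetical stand-ins, not the actual BCF REST API:

```python
def build_segment_request(tenant: str, segment: str, vlan: int) -> dict:
    """Build a logical-segment creation payload for a fabric controller.

    In a controller-based fabric, the operator describes intent once at
    the controller; the controller pushes the resulting configuration to
    each leaf and spine switch, rather than the operator logging in to
    every switch individually.
    """
    if not 1 <= vlan <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    return {
        "tenant": tenant,    # hypothetical field names for illustration
        "segment": segment,
        "vlan": vlan,
    }

request = build_segment_request("prod", "web-tier", 100)
```

The point of the exercise: the unit of change is the fabric-wide intent (one payload), not a per-switch CLI session, which is where the consistency and automation benefits come from.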

Big Switch diagram showing an example leaf-spine architecture powered by Big Cloud Fabric

There are a number of benefits associated with the resulting configuration, including simplified central management, increased visibility into traffic flows and behavior, and improved efficiency through automation. One great use-case for this type of deployment is within a VMware-based SDDC. A solid whitepaper expanding on the benefits of the combined Big Switch and Dell EMC networking solution within a VMware-based virtualization environment can be found here.


All in all, I think this OEM agreement is good news in support of competition and customer choice. It’s also encouraging that Dell EMC appears to be bought-in to Open Networking, both in word and in practice.

Despite this, I still think Dell EMC could do a better job of promoting and selling their network line. It’s not a one-way street, though. It’s also the responsibility of partners (all architects and decision-makers, really) to re-evaluate solutions as they evolve and adjust previous conclusions, as appropriate. Increasingly often, you can come up with a good answer without using the C-word (Cisco).

I look forward to talking more with the Big Switch team about BCF on Dell EMC Open Networking switching during their session at TFDx this Wednesday at 16:30. Be sure to check out the livestream and submit any questions/comments on Twitter to the hashtag #TFDx.

TFDx @ DTW ’19 – Get To Know: Liqid


It’s been said that innovation begets innovation, and Liqid has developed a very interesting composable platform that builds upon recent developments in the areas of interconnect and fabric technology. But before we get into the technical specifics, let’s quickly touch on a few of the drawbacks of traditional infrastructure that composable solutions look to improve upon:

  • Procuring, deploying, and managing datacenter infrastructure is labor-intensive and can be complex.
  • Bespoke configurations and a common lack of centralized management and automation capabilities can impact consistency and reproducibility.
  • Statically-configured resources can be over- or under-utilized, leading either to performance issues or to a reduced return on investment.
  • Operations teams responsible for said infrastructure can struggle to be as responsive as their application owners and developers would like.

Composable solutions, on the other hand, take a building-block based approach, where resources are implemented as disaggregated pools and managed dynamically through software.

Depending on which vendor you ask, the definitions of “composable” and “disaggregated”, as well as the types of resources available for composition, will vary. The common theme here is that we are moving away from static configurations toward a systems architecture that is dynamically configurable through software.

Liqid, as you will see, has a very different take on composability than HPE and Dell, but that doesn’t mean HPE and Dell hardware can’t be part of the Liqid solution. Thus their presence at Dell Tech World 2019, I suppose. 🙂

At its core, their solution consists of three primary components: the Fabric, the Resources, and the Manager. We’ll take a closer look at these next.

The Fabric

What is the self-described “holy grail of the datacenter fabric” that makes the Liqid approach to composability possible? InfiniBand? No. It’s not Ethernet, either. It’s PCIe.

Liqid argues that because PCIe is, and long has been, leveraged heavily in modern CPU architectures, it is uniquely positioned to connect compute to peripheral resources across a switched fabric. This architectural decision allows Liqid to avoid additional layers of abstraction or protocol translation, which at a minimum keeps things more elegant.

At its core, the fabric is powered by a 24-port PCI Express switch, with each port capable of Gen3 x4 speeds. This equates to a per-port bandwidth of 8GB/s full-duplex and a total switch capacity of 192GB/s full-duplex. Devices can be physically connected via copper (MiniSAS) or photonics, providing some flexibility in connecting the required resources.
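Those headline numbers are easy to sanity-check. The snippet below reproduces the vendor-style arithmetic (raw 8 GT/s signaling per lane, both directions counted), and also shows the slightly lower effective rate once PCIe Gen3’s 128b/130b line encoding is accounted for:

```python
LANES_PER_PORT = 4    # Gen3 x4 per fabric port
GT_PER_LANE = 8.0     # PCIe Gen3 raw signaling rate, GT/s per lane
PORTS = 24            # ports on the PCIe switch

# Vendor-style figures: raw rate, 8 bits per byte, both directions counted.
per_direction_GBps = LANES_PER_PORT * GT_PER_LANE / 8  # 4.0 GB/s each way
full_duplex_GBps = 2 * per_direction_GBps              # 8.0 GB/s per port
switch_GBps = PORTS * full_duplex_GBps                 # 192 GB/s total

# Effective payload bandwidth is a touch lower: Gen3 moves 128 data bits
# for every 130 bits on the wire (128b/130b encoding).
effective_per_direction = per_direction_GBps * 128 / 130  # ~3.94 GB/s
```

So the marketing numbers check out as raw-signaling figures; real-world throughput lands just under them even before protocol overhead.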

Overall, the approach of using a native PCIe fabric allows Liqid to be one step closer to true composability than the bigger players, because a larger number of resource types can be pooled and dynamically allocated. More on this in a moment.

Overview of disaggregated resources and their relationship to the Liqid PCIe fabric.

The Resources

In reading over the benefits of available composable systems, it’s easy to get the impression that compute, network and storage resources are the only relevant resource types. HPE Synergy, as an example, introduces hardware resources in the form of an improved blade chassis (frame) with abstracted, virtual networking and internal storage presented over an internal SAS fabric.

Resources can be dynamically and programmatically managed, but the scope of the sharing domain is limited to the frame. Although this limits flexibility, there are still a number of benefits to HPE Synergy vs. a traditional architecture. This is just one interpretation of what composable should look like.

Liqid takes a different approach and deploys pools of resources using commodity hardware attached to its PCIe switch fabric. Because the fabric is native PCIe, a number of additional resource types are available for composition, including GPUs, NVMe storage, FPGAs and Optane-based memory. Compute resources are provided by commodity x86 servers containing both CPU and RAM. This additional flexibility is a primary differentiator for Liqid vs. the other available composable solutions.
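To illustrate what “composing” from disaggregated pools means in practice, here is a minimal Python sketch. The class and function names are invented for illustration and are not Liqid’s actual Command Center API; they just model the core idea of carving a node out of shared device pools:

```python
class ResourcePool:
    """A shared pool of one device type (GPUs, NVMe drives, FPGAs, ...)."""

    def __init__(self, kind: str, devices: list[str]):
        self.kind = kind
        self.free = list(devices)

    def allocate(self, count: int) -> list[str]:
        if count > len(self.free):
            raise RuntimeError(f"not enough free {self.kind} devices")
        taken, self.free = self.free[:count], self.free[count:]
        return taken


def compose_node(pools: dict[str, ResourcePool], spec: dict[str, int]) -> dict:
    """'Compose' a bare-metal node by attaching devices from shared pools,
    as a fabric manager would do over the PCIe fabric."""
    return {kind: pools[kind].allocate(n) for kind, n in spec.items()}


pools = {
    "gpu": ResourcePool("gpu", ["gpu0", "gpu1", "gpu2", "gpu3"]),
    "nvme": ResourcePool("nvme", ["nvme0", "nvme1"]),
}
# A 2-GPU, 1-NVMe node; remaining devices stay free for the next node.
node = compose_node(pools, {"gpu": 2, "nvme": 1})
```

The operational win is in the inverse operation, too: decomposing a node returns its devices to the pools, so a GPU is only “owned” by a server while a workload actually needs it.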

Commodity resources attached to x86 compute over the Liqid PCIe fabric

The Manager

Bringing the solution together is the management component, the Liqid Command Center. This provides administrators with a way to graphically and programmatically create systems of the desired configuration using compute, storage, network and other resources present on the fabric. In short, the features you’d expect to be present are here, and it looks like some attention has been paid to the style of the interface. A brief demonstration is available on YouTube and gives a good preview of the look/feel and capabilities:


Although there’s a significant amount of marketing fluff to sift through at times when looking into composable solutions, I don’t believe composability is just another meaningless throw-around term.

There are benefits to be had, both on the technical and operational side of things. Based on my initial research, the Liqid approach appears to be a step in the right direction. However, achieving true composability looks to be a work in progress for all solution vendors.

I look forward to talking with the Liqid team about that point and more this Wednesday 5/1/19 at TFDx. Check out the live stream at 13:30 using the link below, and feel free to send your questions via Twitter using the hashtag #TFDx and #DellTechWorld.