I work in a lot of network environments and see many different approaches to security and networking. One constant I have found is that nearly all IT professionals struggle to adequately identify and secure the devices on their network. Short of imposing extreme security measures and prohibitive device-onboarding practices, it is almost impossible to dynamically assign network access without a network access control (NAC) solution. I will dive into the basics with a mostly vendor-agnostic explanation.
At the most fundamental level, network access control systems are designed to identify the devices and users on your network and then act on that identification. The solution typically integrates with most directory or identity providers and can be used for authentication, authorization, and accounting (AAA). The system can leverage hard-coded attributes of the user or device and enforce a security posture based on them. A NAC can also leverage other factors, such as how and where a device is connecting, along with other more nuanced, dynamic characteristics of the connection and identity.
What the system does with that information is the most important part. For example, it is rare that every person in a business network should have the same access. However, it is not rare that many people in a department or division have very comparable access or restrictions. Similarly, devices that do the same job generally require identical network access. If the NAC can leverage user attributes like department or division, it can use similar attributes for a device: it understands that an HVAC air handler requires the same access assigned to the other air handlers that share the same device attributes.
Using what some vendors call roles, paired with enforcement policies, one can automate the assignment of access based on identity. This allows for a scalable solution that delivers a consistent application of security without an administrator intervening for every network connection. This concept is called role-based access.
I use the term "application of security" loosely because each vendor accomplishes this task differently. Some tunnel the user traffic to a firewall or wireless controller and apply stateful firewall policies to it. Others change the network or VLAN the device is on so that access is restricted to that network segment. Some rely on client-side software to enforce the role assigned by the NAC.
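To make the role-based access idea concrete, here is a minimal sketch of the kind of logic a NAC policy engine evaluates. All role names, attributes, and enforcement values are hypothetical illustrations, not any vendor's actual policy syntax; real products express the same idea as configurable rule sets.

```python
# Hypothetical sketch of NAC role-based enforcement logic.
# Roles, attributes, VLAN numbers, and firewall role names are
# illustrative assumptions only, not a real product's configuration.
from dataclasses import dataclass

@dataclass
class Endpoint:
    identity: str      # user or device identity from the directory
    department: str    # attribute pulled from the identity provider
    device_type: str   # profiled device category (e.g. "hvac-controller")
    connection: str    # "wired", "wireless", or "vpn"

# Each role maps to an enforcement action: a VLAN, a firewall role, or both.
ROLE_POLICIES = {
    "facilities-iot": {"vlan": 210, "firewall_role": "iot-restricted"},
    "finance-user":   {"vlan": 120, "firewall_role": "finance"},
    "quarantine":     {"vlan": 999, "firewall_role": "deny-all"},
}

def assign_role(ep: Endpoint) -> str:
    """Evaluate rules top-down; first match wins, unknowns are quarantined."""
    if ep.device_type == "hvac-controller":
        return "facilities-iot"
    if ep.department == "Finance" and ep.connection in ("wired", "wireless"):
        return "finance-user"
    return "quarantine"

# A new air handler gets the same role as every other air handler,
# with no administrator intervention.
role = assign_role(Endpoint("ahu-07", "Facilities", "hvac-controller", "wired"))
print(role, ROLE_POLICIES[role])
```

The key design point is that the rules reference attributes, never individual endpoints, which is what makes the approach scale: the seventh air handler is admitted by the same rule that admitted the first six.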
A network access control solution is not a panacea that will make all your ailments cease. While many NACs hold a great deal of machine learning potential, they still require some initial administration to create the logic by which they apply enforcement policies. They are not infallible. Like any computing system, they need some TLC when first deployed. Once they are up and running, you can sleep easier at night knowing that there is an intelligent application of security for anything connecting to your network.
I would recommend a NAC to anyone who runs a network with more than 100 users. If we assume that each person has three computing devices, that is 300 end-user devices. Since not all of them are corporate-owned and managed, we would need to delineate access for each user group and device type. We would then need to decide whether to apply different security based on how the device or user connects, or whether the device presents a risk to the company. This sounds like a lot of work, and it can be. But the work only needs to be done once when programming the logic into a NAC solution.
This is not meant as a comprehensive analysis of each of the major players in the marketplace. In fact, there are some decent open source and free NAC-like products out there that are relatively capable. Most of those do not support machine learning and cannot identify devices very well. However, they can provide authentication and authorization functions.
At the very least, my hope is to impress upon anyone in the market that a NAC is an essential component of your security arsenal. The days of having the same login for every switch and router are long behind us. Treating every user and device the same is also a thing of the past. If you want the scalability that a network access control solution provides, I suggest you reach out to your partner of choice and inquire about the products they offer in this security space. Zunesis is available to help you find the right partner for your organization.
When deciding on the best solution for your infrastructure, there are a few popular options available: converged, hyperconverged, and disaggregated hyperconverged. The size and complexity of your environment will influence which infrastructure you choose.
Choosing between the available architectures requires a thorough understanding of both the current deployments in your data center and the scaling factors affecting your organization specifically.
IT Sprawl is still a very real issue for data centers. It leads to increased costs, reduced efficiency, and less flexibility. A converged infrastructure (CI) helps with this by creating a virtualized resource hub. It increases overall efficiencies across the data center using a single integrated IT management system.
A converged infrastructure (CI) aims to optimize by grouping multiple components into a single system. Servers, networks, and storage are placed on a single rack. Replacing the old silos of hardware, convergence allows IT to become more efficient by sharing resources and simplifying system management. This efficiency helps keep costs down and systems running smoothly. On the flip side, it can be expensive and is not especially flexible when scaling.
While converged infrastructure is effective in small-scale environments, most mid-market and enterprise organizations are limited by this architecture. Its hardware is proprietary in nature and it ineffectively distributes resources at scale.
Hyperconverged infrastructure (HCI) was designed to fix the scalability issue, and it certainly improved things. Built on a building-block approach, HCI allows IT departments to add nodes (combined server, storage, and compute units) as needed. It further simplifies management by placing all controls under a single user interface.
By leveraging commodity hardware across the board, hyperconverged infrastructure significantly disrupted the financial dynamics. It was radically less expensive, at least initially, than converged infrastructure while still providing most of the benefits.
Most organizations today use either a traditional CI or HCI deployment. There are advantages and disadvantages to both.
While HCI has many benefits, there are some significant disadvantages. For quickly growing businesses that need an easy-to-manage architecture embedding as many elements of modern-day computing as possible – like disaster recovery, security, and cloud – HCI may not be the best solution. Hyperconverged solutions have use cases where they do not fit, which has disrupted operations for customers who did not realize the impact some workloads would have.
While more scalable than CI, HCI still requires storage and servers to grow in lockstep. That is a challenge with the types of workloads companies run today.
It’s hard to argue with the manageability and scalability advantages of traditional HCI platforms. IDC predicts that the HCI market revenue will grow at a CAGR of 25.2% to crest $11.4 billion in 2022. As HCI has matured, enterprises have been looking to use it to host a broader set of workloads.
There are still workloads whose performance, availability, and/or capacity demands call for an architecture that lets IT managers scale compute and storage resources independently. Such workloads need a storage solution better suited to very dynamic and unpredictable growth.
Enter the latest solution: disaggregated hyperconverged infrastructure. Disaggregated hyperconverged infrastructure (dHCI) combines the flexibility of CI and the simplicity of HCI to create a more resilient, evolved data center architecture. There are numerous benefits to dHCI; the value proposition most attractive to users today is disaster recovery as a service, or DRaaS.
While not every workload can run on a hyperconverged infrastructure, it can on a dHCI. That is part of what makes it appealing: it does not come with the restrictions of its predecessors. Ultimately, disaggregated HCI uses components similar to converged infrastructure but applies modern infrastructure automation techniques to enable automated, wizard-based deployment and simple, unified management at costs similar to HCI.
With dHCI, IT teams are able to focus on support and service delivery while artificial intelligence (AI) takes care of infrastructure management. As data centers grow in size and complexity, such an intelligent solution helps firms get the maximum return on investment (ROI) from IT equipment.
dHCI appeals to IT managers who want the simplicity of HCI and the flexibility of converged infrastructure. It is simple to deploy, manage, scale, and support. Because it is software-defined, compute and storage are consolidated and managed through vCenter, with full-stack intelligence from storage to VMs and policy-based automation for virtual environments integrated throughout.
HPE Nimble Storage dHCI pulls together the best elements of each type of infrastructure, combining the simplicity of HCI management with the reliability, familiarity, and flexibility of scale of our beloved three-tier architecture. It is essentially high-performance HPE Nimble Storage, FlexFabric SAN switches, and ProLiant servers converged into a single stack. Deployment, operation, and day-to-day management tasks are hugely simplified with this solution.
The out-of-box experience requires very little technical expertise to deploy and use the stack. Once it is up and running, day-to-day tasks, such as adding more hosts or provisioning more storage, are simple "one-click" processes that take very little technician time. Storage, compute, and networking can be scaled independently of each other. This reduces the VMware/Hyper-V licensing required at scale and cuts costs, since there is no need to scale out every component when you simply need more storage or compute.
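The cost argument for independent scaling can be sketched with a quick back-of-the-envelope calculation. Every price and capacity below is an illustrative assumption, not a vendor quote: the point is only that when storage and compute must grow in lockstep (classic HCI), extra storage drags full nodes and hypervisor licenses along with it, while a disaggregated design lets you buy only the resource you are short of.

```python
# Hypothetical cost sketch. All figures are illustrative assumptions,
# not vendor pricing. Classic HCI adds storage only by adding whole
# nodes (compute + storage + hypervisor license); dHCI can add a
# storage-only expansion with no extra license.
HCI_NODE_COST = 40_000    # assumed: server + storage + hypervisor license
HCI_NODE_TB = 20          # assumed usable capacity per node
DHCI_SHELF_COST = 25_000  # assumed: storage-only expansion, no license
DHCI_SHELF_TB = 20

def cost_to_add_storage(extra_tb: int, unit_tb: int, unit_cost: int) -> int:
    """Round the required capacity up to whole units, then price it."""
    units = -(-extra_tb // unit_tb)  # ceiling division
    return units * unit_cost

extra = 60  # TB of additional storage needed; compute demand is unchanged
print("HCI :", cost_to_add_storage(extra, HCI_NODE_TB, HCI_NODE_COST))
print("dHCI:", cost_to_add_storage(extra, DHCI_SHELF_TB, DHCI_SHELF_COST))
```

Under these assumed numbers the HCI path also leaves you paying for compute and licenses you did not need, which is the hidden cost the independent-scaling model avoids.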
The whole stack plugs directly into the HPE InfoSight portal and support model, which automates simple support tasks so that first- and second-line support are no longer needed to triage issues. dHCI brings this first-class support and analytics to VMware, ProLiant, and FlexFabric as well as the Nimble Storage platform. With dHCI, it is now possible to deploy an entire virtualization stack and have it monitored and supported 24/7/365 by skilled HPE engineers.
Want to learn more about these infrastructure solutions and discover which one is a good fit for your organization? Request a consultation today with Zunesis.