In preparing for this version of my blog, I went back and read the April 2020 version in which I talked about Microsoft introducing, or renaming, their Microsoft 365 product line. I started off that blog making light of the “Bizarro World” I lived in at the time. The Covid-19 Pandemic was in full swing, with the 30-days-to-stop-the-spread shutdown in effect. All our lives have been fundamentally changed as we navigate the daily effects the Pandemic continues to bring, truly a “Bizarro World”.
Also, in that blog, I discussed how Microsoft was pushing its customers to the cloud by making some features available only in the cloud versions of its software, not in the on-premises versions.
Microsoft states in their Office 2019 for Windows FAQ…
“Office 2019 (for both Windows and Mac) is a one-time purchase and does not receive feature updates after you purchase it. It includes a meaningful subset of features that are found in Microsoft 365, but it’s not part of Microsoft 365. Office 2019 will receive quality and security updates as required.”
In September 2020, Microsoft announced they would be ending the Open License program at the end of 2021. For those who might not be familiar, there are three Open programs: Open License, Open Value, and Open Value Subscription. Microsoft explained the change this way:
“Simplifying the purchase experience for our customers is a core element of making it easier to do business with Microsoft. It requires a change in the way we’ve engaged with you, and in how you buy and manage your software licenses and subscriptions for online services.”
“In September 2020, we announced changes to the Microsoft Open License program with the introduction of perpetual software license purchases through the new commerce experience. If you’re a small or midsized customer, you can now buy software licenses from partners participating in the Cloud Solution Provider program. As a result, we’ll be ending purchases through the Open License program on December 31, 2021. If you have a small or midsize organization with little or no IT resources, Microsoft partners can provide expertise and services and build unique solutions with the latest Microsoft services and offers.”
Previously, Open License purchases were made through authorized partners, such as Zunesis; going forward, those purchases will flow through CSP partners instead. The CSP model was designed for the partner to add value to its customers’ cloud experience via support, billing flexibility, and advice. The customer effectively has a pay-as-you-go consumption arrangement through the partner, rather than directly with Microsoft.
CSP has numerous benefits for customers. One is flexibility: you pay for what you use and can add or remove licenses on a monthly basis. Other features include monthly billing with no upfront costs, access to the partner’s licensing expertise, and discounts off MSRP, to name a few.
This means that as-needed software purchases without Software Assurance (SA) will become subscription-based purchases through Microsoft’s cloud.
Does this mean you have to “move” to the cloud?
No, you get all the features you are used to, with the flexibility of the CSP program. The software is downloaded and installed the same way it always was.
In fact, it could be argued that licensing through CSP has better benefits than SA.
There are several options for purchasing Office via CSP; some include online services, some are software only. Plans start as low as $5 per month. Each of these plans includes always-up-to-date software. When updates are released, the user is prompted to install them. A few examples:
Windows Server and Microsoft SQL Server can also be purchased through CSP. These licenses can be used both on-premises and in Azure; the Azure Hybrid Benefit can save 40% in Azure, with further savings because Client Access Licenses are not needed in Azure. Customers are billed monthly for CSP licenses, even though the server purchases are annual.
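The Hybrid Benefit savings described above come down to simple arithmetic. Here is a minimal sketch; the hourly rates below are hypothetical placeholders, not actual Azure pricing, which varies by region, instance size, and time:

```python
# Hypothetical illustration of the Azure Hybrid Benefit savings calculation.
# Both rates are assumed values for the sake of the example.
PAYG_RATE = 0.40     # $/hour, license-included pay-as-you-go rate (assumed)
HYBRID_RATE = 0.24   # $/hour, base compute rate with Hybrid Benefit (assumed)

HOURS_PER_MONTH = 730  # average hours in a month

paygo = PAYG_RATE * HOURS_PER_MONTH
hybrid = HYBRID_RATE * HOURS_PER_MONTH
savings_pct = (paygo - hybrid) / paygo * 100

print(f"Pay-as-you-go: ${paygo:,.2f}/mo, Hybrid Benefit: ${hybrid:,.2f}/mo")
print(f"Savings: {savings_pct:.0f}%")
```

With these assumed rates, the benefit works out to the 40% figure cited above; your actual savings depend on your licenses and the workloads you move.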
Microsoft continues to push customers toward their cloud offerings. There are a lot of combinations when looking at Microsoft Licensing through CSP. A CSP Partner can assist in finding the most cost-effective solution for your organization.
August 11, 2021
DENVER, CO – Zunesis, a leading provider of Information Technology (IT) infrastructure and consulting services announced a merger with Absolute Performance, Inc. (“API”). Since its founding in 2004, Zunesis has helped large enterprises, mid-market commercial and public sector clients with their IT infrastructure strategy and data center needs. API is a leading provider of end-to-end IT managed services to enterprise and middle-market clients across the United States.
Steve Shaffer, CEO and founder of Zunesis, said “Absolute Performance has the right vision, strategy and culture to deliver world class IT services throughout the industry. I am thrilled about the partnership with API and look forward to the next chapter of growth for Zunesis. I believe this combination provides new and exciting opportunities for Zunesis employees.”
Shaffer further explained, “for existing Zunesis customers, nothing changes in terms of support, personnel or current contracts. The Zunesis name, core competencies and personnel will remain intact. Zunesis will now be able to offer world-class 24/7 managed services, help desk, and cyber-security services.”
“Zunesis has deep expertise in the IT infrastructure strategy & planning market and a long list of very happy customers, and we are thrilled to bring Zunesis together with API’s leading managed services platform – together, the companies will deliver a best-in-class suite of IT managed service solutions to our combined customer base,” said Scott Shafer, CEO of Absolute Performance.
About Absolute Performance:
Absolute Performance, Inc. (“API”) is a leading provider of IT management services with customers ranging from Fortune 500 to mid-market organizations across a wide range of verticals. API provides an end-to-end suite of IT services including cybersecurity, 24/7 monitoring and management, IBM infrastructure management and modernization, cloud hosting, and IT outsourcing. API is based in Broomfield, CO.
For additional information on Absolute Performance, see https://www.absolute-performance.com/
About Zunesis:
Zunesis’ vision is to make the lives of its customers better by managing their IT strategy and management needs. Zunesis is a prominent IT Solutions provider in enterprise infrastructure, Microsoft solutions, edge computing, networking, and on-premises and cloud solutions. Zunesis serves both public and private sector customers and provides services to help assess and improve any technology environment. Zunesis is based in Englewood, CO.
For additional information on Zunesis, see https://www.zunesis.com/
Assessments are a necessary evil for an IT Department. Bruised egos and the exposure of negligence and complacency are all possible deliverables of a thorough assessment. But we’re going to look at it in a different way.
An IT Department can be considered the Heroes of an organization because they realized an assessment (or assessments) needed to be done. By preventing a catastrophic breakdown or failure of an organization’s infrastructure, or by saving an organization millions of dollars, the IT Team can lobby the C-Suite for that pool table in the breakroom without much resistance.
We all wear many hats in an organization and can be considered Jacks of all trades and Masters of none. An individual heading up the IT Department may have a core competency in Networking but little or no knowledge of Storage. There’s also the chance that a single individual makes up the entire IT Department, which is often the case for SMB companies. Enterprises have a few more folks, but even their bandwidth is not what it should be.
Gaps in knowledge across Networking, Compute, Storage, and other areas can jeopardize an organization’s infrastructure and cause significant trouble with the products and services it delivers. However, being proactive instead of reactive can help minimize the opportunity for failure or loss to occur.
Assessment is a very broad term. There can be levels and micro-levels of assessments within an IT Infrastructure.
Here’s a very high-level summary of the types of assessments that are available via Zunesis at the click of the ‘SEND’ button on a product / service inquiry form:
1. IT Infrastructure Assessment – Experts will assess the current IT infrastructure and deliver a report detailing observations regarding hardware, software, and the business processes impacted by the organization’s IT environment. Recommendations and potential solutions should also be part of the final deliverable.
2. Data Management Assessment – The Data Management Assessment Service can focus separately on Production Data, Archive Data, or Backup/Recovery, or it can encompass all three. Deliverables include documentation of current data management practices as well as short-term and long-term objectives.
3. Recurring Data Center Assessment & Advisory Service – This is a review of the IT Infrastructure Assessment, repeated on a quarterly, semi-annual, or yearly cadence. Benefits include lower support and maintenance costs, greater control over the existing environment, and other attributes that lead to high performance without breaking the bank.
4. VMware Assessment Services – Deliverables include documentation on current environment, observations about current VMware use and short-term recommendations and areas of opportunity. This information will provide a clearer picture of the environment’s long-term strategy and cost efficiency.
5. Wireless Site Survey – This assessment should be done by all organizations (i.e., Government agencies, Education, office buildings, etc.). A Network Engineer imports floor plans into a survey tool and draws the walls on the map to model what the coverage area will look like. If a wireless network is already installed, the site survey instead validates the installation.
6. Ransomware Recovery Preparedness & Risk Assessment – There’s no need to elaborate on this assessment. To be blunt, if an organization doesn’t feel this is necessary, then they better be prepared to lose all their data and pay a fortune to get it back. And, there’s a chance an organization may never get their lost data back. People will lose jobs over this if this assessment is not done and done soon.
7. Firewall Assessment – This provides the overall utilization of an organization’s current firewall and its adherence to industry best practices through a Network Vulnerability Assessment. Recommendations, best practices, a snapshot of the existing firewall layout, and the potential long-term layout are just some of the attributes a company can glean from this assessment.
Now that these assessments and the importance of each have been identified, organizations need to think operationally, tactically, and strategically about the consequences and costs of not undergoing them. Potential Problem Analysis consists of identifying the problem before it actually occurs. This type of thinking applies in any infrastructure environment and should be implemented immediately. Band-Aids and duct tape can only go so far.
Again, by being proactive rather than reactive, an organization can prevent significant downtime, reduce costs by protecting current hardware, avoid ransomware, avoid closure, save jobs, and gain many other benefits that lead to continued uptime and a valid sense of security.
Having started my IT Career in the 80’s, I’ve had a front row seat to the ever-evolving landscape that makes up IT Infrastructure. In the days of centralized systems, with Dumb Terminals, monitoring and managing systems was simple relative to today’s environments. As distributed computing made its way into the data center and across desktops, monitoring and management became far more challenging. Troubleshooting, software/hardware upgrades, and deployment often meant visiting each desktop in the organization.
As centralized and distributed infrastructures began to converge over time, we never got back to the simplicity of centralized systems. We saw day-to-day monitoring and management improve with centralized software distribution and updates, remote desktop access, centralized alerts and notifications, etc. However, the management solutions that evolved to support a world in which Hypervisors, co-location, and multi-site infrastructures rule have themselves become large, complex infrastructures to deploy and maintain. Today, managing the IT Infrastructure means dealing with a multitude of device managers and monitoring tools across a siloed environment of storage, compute, network switches, and firewalls.
Our IT Infrastructures are becoming more diverse and geographically distributed than ever before. It’s no surprise that we are now starting to see solutions that simplify the Monitoring and Management experience. These solutions are going to be mandatory as IT Infrastructure continues its evolution to a hybrid, compute anywhere landscape.
Hewlett Packard Enterprise has embraced the idea of managing and monitoring a compute-anywhere environment, underscoring that commitment with tools like InfoSight and a series of new solutions announced in the spring of 2021. Getting to this point involved many HPE-developed technologies and hardware/software acquisitions along the way.
One of their most visible and telling acquisitions was Nimble Storage. While the Nimble Storage technology has been a solid solution by itself, we were told from the start that HPE really purchased Nimble because of its InfoSight AI/predictive analytics platform.
In 2017, InfoSight was a tool used to monitor and report on Nimble storage arrays from anywhere you could access a browser, but at the time, Nimble was the only device included. Today, HPE InfoSight has expanded its use cases: most HPE Storage and Compute platforms, as well as virtualized environments, are supported by InfoSight.
InfoSight uses cloud-based machine learning to build Global Intelligence and insights for IT Infrastructure. The platform simplifies IT operations by predicting and preventing problems across the infrastructure stack. It makes decisions that optimize application performance and resource planning. This intelligence is built from telemetry data across much of HPE’s global installed base. I can personally attest to the power and usefulness of InfoSight; it has helped many of our clients troubleshoot issues and plan for expansion using the information it provides.
HPE continues to grow the capabilities of HPE InfoSight predictive analytics and monitoring across their compute and storage solutions. They have also been working on tools to improve the deployment, provisioning, and management of those solutions. Starting in April of 2021, HPE has announced solutions built on a cloud-native architecture that manage infrastructure components through a SaaS-based control plane, abstracting infrastructure control from the physical infrastructure.
In April, HPE announced Aruba ESP (Edge Services Platform), designed to address fragmented network operations and simplify the network management lifecycle. ESP converges the management of wired, wireless, and WAN networks across campus, branch, remote worker, and data center locations. It will be no surprise, to those who have managed Aruba environments, that the Unified Infrastructure announced with ESP is based on Aruba Central, a cloud-native, microservices-based platform that has been part of the Aruba portfolio for some time.
With the inclusion of ESP, Aruba Central provides a full range of management services for the network.
To continue the theme of SaaS management solutions, in May, HPE announced their Data Services Cloud Console. Data Services Cloud Console is based on the Aruba Edge Services Platform. Because Data Services Cloud Console is delivered as SaaS, there is no software to deploy, manage, or maintain, and you automatically stay current on the latest software features without any action or involvement required.
Data Services Cloud Console (DSCC) is a subscription service integrated with the new Alletra platform, and it will also support Primera and Gen 5 Nimble arrays. It deploys, provisions, and monitors supported storage arrays through role-based access controls. DSCC delivers global unified management, enabling customers to manage and monitor geographically distributed systems from edge to cloud through a single web interface. Managing hundreds of systems becomes as simple as managing one.
In June 2021, HPE announced an expansion of the Cloud Console with its Compute Cloud Console solution. Like ESP and DSCC, Compute Cloud Console is another part of the SaaS platform that will allow you to manage your Compute environments from anywhere and wherever they are across your infrastructure.
Hybrid infrastructure is here to stay. It is essential that we find ways to deploy, manage, and monitor infrastructure without proliferating the siloed tool sets and manual processes that have become common in geographically limited environments. Since our infrastructure is geographically dispersed, it is likely that those who manage that infrastructure will also be spread across the country and around the globe. So, role-based, self-service deployment, management, and monitoring should also be part of how we plan to support our IT Infrastructure.
With InfoSight, Edge Services Platform on Aruba Central, Data Services Cloud Console and Compute Cloud Console, HPE has provided a suite of tools that will support our journey to the next evolution of the IT Infrastructure landscape.
Contact Zunesis to find out more about the solutions discussed throughout this blog.
When you want to take on new technology these days, the options seem endless. The As A Service model is the leading trend in modernizing your IT environment. Why not pay for what you use each month rather than investing in an expensive piece of hardware? Capex is out; Opex is the future of budgets, and the shift is happening rapidly. Who wants to drive with gas when electric is so much more efficient? This train of thought is driving (pun intended) how IT professionals are thinking and responding to the services offered.
Disaster Recovery often gets put on the back burner. Storage and compute have always been the exciting leaders in the data center. Though in a time when cyber threats and ransomware are on the rise, disaster recovery is taking center stage. To make everyone’s life easier, more efficient, and profitable, Disaster Recovery as a Service (DRaaS) has been at the forefront of OPEX budgets.
This cloud computing and backup service model uses cloud resources to protect applications and data from disruption caused by a disaster. A complete system backup allows an organization to maintain continuity in the event of a failure. A DRaaS solution provides an easy way to move a production workload to the cloud, and once that instance is deployed, it can act as a sandbox for further experimentation. Necessity is the mother of invention, but experimentation builds the momentum for innovation.
How fast can your organization recover from the moment of a disaster to the moment you return to regular operation? Businesses today have no tolerance for downtime. DRaaS provides a critical bridge, allowing companies to operate remotely while normal processes are restored.
While natural disasters are commonly associated with the need for DRaaS, five of the most common reasons an organization uses DRaaS are:
“How much can you save?” is the new mantra on the mind of every organization becoming modernized! Disaster recovery is often seen as a burdensome cost when it should be thought of as an investment. For any organization considering transitioning to infrastructure as a service (IaaS), DRaaS can act as a stepping stone to full virtualization.
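The “investment, not cost” framing comes down to simple arithmetic: compare the expected cost of downtime without disaster recovery to the subscription plus residual downtime with it. Here is a back-of-the-envelope sketch; every figure below is a hypothetical placeholder, not real outage statistics or Carbonite pricing:

```python
# Back-of-the-envelope downtime cost comparison. All figures are assumed
# values for illustration only; plug in your own estimates.
revenue_per_hour = 5_000              # assumed revenue at risk per hour of outage
outage_hours_per_year = 8             # assumed annual downtime without DR
draas_monthly_fee = 1_200             # assumed DRaaS subscription cost
residual_outage_hours = 1             # assumed downtime even with DRaaS failover

cost_without = revenue_per_hour * outage_hours_per_year
cost_with = draas_monthly_fee * 12 + revenue_per_hour * residual_outage_hours

print(f"Expected annual outage cost without DR: ${cost_without:,}")
print(f"Annual cost with DRaaS (fees + residual downtime): ${cost_with:,}")
```

Under these assumptions, DRaaS comes out well ahead; the point is that the comparison is only meaningful once you estimate what an hour of downtime actually costs your organization.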
Carbonite Recover is a DRaaS offering that can help you achieve all these goals. By securely replicating critical systems from a primary environment to the cloud, it ensures an up-to-date secondary copy is available for failover at any moment. Who doesn’t want to minimize downtime as well as cost? With DRaaS you pay for what you use, when you use it, not for idle resources.
Carbonite Recover allows businesses to enjoy all the benefits of resilient IT without owning the hardware or being responsible for maintenance. Whether you are modernizing or a minimalist, both approaches recognize that less means more. Who doesn’t want the freedom of less responsibility!
Today, it is common to combine modern and legacy systems operating side-by-side. Not all DRaaS vendors support legacy systems, but Carbonite continues to be an industry leader in supporting many different legacy platforms. When protecting your environment with Carbonite, you can also count on them to support other platforms such as:
Carbonite’s DRaaS advantages are built in, allowing multi-site, 100 percent cloud-based protection. Resources are replicated to multiple sites to ensure continuous backup if one or more sites become unavailable. Depending on the customer’s requirements, the ability to be granular or comprehensive can reduce cost with flexible protection.
Not only does Carbonite support a range of legacy and cloud-based platforms, they also offer more control than competing solutions. They provide flexible failover options that don’t require spare machines or extra fees, along with 24/7 phone support. DRaaS is one of those things you can’t afford not to have.
See how one retail chain stays in control with Carbonite Recover in this case study.
Zunesis partnered with Carbonite many years ago, not just as a reseller but as a customer. We have relied on the many advantages of using their products. We can attest to their cutting-edge technology, quality customer service, and competitive pricing. For more information on all the Carbonite products, contact us today.
The last year has been wild. Organizations across the globe have had to adapt their operations, and methods of doing business, to accommodate the various challenges and changes brought on by a post-Covid world. The problem, however, is IT needs have not gone away while these other changes have had to take place. On the contrary, technology needs are on the rise more than ever. But how can companies and organizations expect to procure these new IT solutions with a fixed budget and growing, often unexpected, technology needs? Thankfully, HPE has got you covered in MULTIPLE ways! Check them out below!
From helping release capital from existing infrastructures to deferring payments, and providing pre-owned tech to relieve capacity strain, Hewlett Packard Financial Services has a variety of financial and asset lifecycle solutions. You can leverage these solutions to support your needs today and position your organization for long-term success.
Here are just a few exciting promos currently being offered by HPEFS to help customers secure technology in the most cost-effective way possible.
Best of all, HPE Financial Services is not dependent on an HPE hardware sale. Does your company need to secure a large software purchase? Procuring a new product from a different manufacturer? No problem! HPEFS can apply its flexible payment solutions to all your IT projects! Just provide a quote or proposal and they will help you get started!
So now that we have discussed procuring traditional IT Solutions using some alternative financing options, let’s talk about another great offering from HPE. It’s called HPE GreenLake Cloud Services and it rethinks the technology procurement process entirely. Read on!
HPE GreenLake Cloud Services is a new consumption-based IT model that marks a paradigm shift in the way we operate IT. It focuses on outcome-based consumption, while radically simplifying IT and freeing up resources. Best of all, it really does deliver the best of both public cloud and on-premises IT—so you don’t have to compromise. Payment is simple and based on a single pay-per-usage metric that is relevant to the particular solution and your business.
Here are some key highlights:
HPE GreenLake is:
So the moral of this blog is: don’t let the ever-changing IT infrastructure procurement process scare you. Above are just a few ways in which we can help you reach your project goals on any budget. Reach out to a representative at Zunesis today to help you get the process started!
Last year was characterized by a collective, sudden shift to a remote workforce. 2021 is the year of the hybrid model: some employees safely return to the office, others remain at home, and many do a mix of both.
Surprisingly, many organizations are discovering that concerns about potential lost productivity were exaggerated. It is now believed that one-quarter or more of all workers may become predominantly home-based. One of the many consequences of this change is an increase in cybersecurity risks, along with the complexity of implementing effective security to protect computing infrastructure.
As always, vigilance by the security professionals tasked with protecting networks from intrusion is the paramount defense. The basic formula is simple. Cybersecurity is based on defining what needs to be protected and at what points the protection is required. However, the explosive growth of remote workplaces has strained the information technology infrastructure of most organizations.
A basic defense tactic is to limit the number of potentially vulnerable attack surfaces accessible to a bad actor. With remote work, attack surfaces may be multiplied. A workforce that previously accessed organizational data and code within an organization’s well-protected networks now expects the same level of access from outside those networks. The obvious counter to this is to require access through encrypted VPN (Virtual Private Network) connections.
Adding to the risk equation, many remote workers use personally-owned devices while “on the job.” An organization’s well-protected network is potentially compromised by insecure access from computers, smartphones, and tablets beyond the control of the IT security team. Remote workers also are likely to share their Internet access points with family and/or friends. This introduces still more non-secured devices to a shared connection.
Other pandemic-related challenges faced by security and IT professionals involve changes in supply chain relationships. The introduction of new business partners to fill gaps in a supplier network may inadvertently lead to oversights in vetting these partners and enabling secured communications links.
In manufacturing organizations, accelerating the digitalization of ICS (Industrial Control Systems) also is an issue. Remote management of ICS requires connectivity to many devices that previously were secured, in part, by isolation. However, the improvements to operational agility realized as business models adapt make it likely that these practices will become ingrained. Unless, of course, a future security failure causes a snapback.
With the trend clearly pointing to workplaces where remote access is the rule, how can organizations manage the increased threat level? Cybersecurity and IT professionals recommend starting with reinforcing basic security practices to adjust for a remote workforce. They note that workers should be wary of information requests and always verify the authenticity of the source. They should make sure that all devices with network access have up-to-date software and patches, and employ dual-factor authentication for devices whenever possible. Most importantly, experts note that even in a post-pandemic era, cybersecurity is shifting away from a perimeter-based model where all assets inside a network are trusted. Instead, zero-trust architectures, in which individuals, devices, and applications are always authenticated and authorized before gaining access to a network, need to become the norm.
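To make the dual-factor recommendation concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme defined in RFC 6238, which is what most authenticator apps implement: the server and device share a secret, and each derives a short code from the secret plus the current 30-second time window. This is an illustration of the mechanism, not a hardened implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1.

    secret_b32: shared secret, base32-encoded (as authenticator apps expect).
    at: Unix timestamp to compute the code for (defaults to "now").
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret is "12345678901234567890" (base32 below);
# at timestamp 59 the spec's expected SHA-1 code ends in 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> 287082
```

Because the code changes every 30 seconds and never travels with the password, a stolen password alone is not enough to log in, which is exactly the property the zero-trust model depends on.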
The recurring theme of these recommendations is authentication of sources, of users, and of devices. In the last decade, cybersecurity professionals have reached a consensus that authentication schemes should be based on a protected hardware element. The purpose of what is called a “secure element” is to provide a protected root-of-trust that can be embedded in each device capable of being connected to a network (whether a private network or the Internet).
The pandemic’s impact on remote work is an acceleration of a long-term trend that will continue for many years. The evolution of remote workplaces is one of many adaptations made possible by the emergence of connected, smart devices in nearly every aspect of people’s lives. The “Internet of Things,” which is likely to enter an even more dynamic stage of growth as 5G connectivity makes it even easier to link devices together, extends cybersecurity concerns for organizations and individuals alike.
Ultimately, the billions of connected devices in the Internet of Things also represent a multitude of potential attack surfaces. In the smart home of the future, remote workers may ask their smart speaker or smart TV to access files. It will be up to cybersecurity professionals to protect their networks from access by unsecured devices. A root of trust in every device will make what some might think an impossible task possible.
I love football. I watch all levels, from my kid’s 3rd and 4th grader team all the way up to the NFL. The game to me is fascinating. Not just the game itself, but all of the hype and trappings. This time of year, the NFL is not playing or practicing. Instead, the teams are focused on building the best team for the upcoming year. What that means is they are looking to sign players to fill roster spots. Maybe even more importantly, they are poring over all the data they can find for the upcoming draft class (starting on April 29th). These teams interview, watch college game-film and run potential players through a series of drills to understand their athletic ability.
Like all of us, the NFL has had to tweak how they do all of this prep because of the current pandemic. In years past, the NFL would gather in Indianapolis, Indiana. They would send most, if not all, of the prospective draft players through the “scouting combine”. This was a time when the coaches and league personnel could all huddle together. They would watch how each player performed at the individual tasks they were given. (This always seemed weird to me, that they would judge players on a bunch of individual tasks. Especially, when these players were auditioning to play a very intricate team game, but that is a whole other blog entirely).
With the arrival and continued issues that COVID-19 has brought, the NFL has canceled this year’s combine and most of the in-person meetings. Instead, they will have to sift through, critique, and break down these players virtually. This is going to be a HUGE change for these organizations.
Some of the personnel have done things one way their whole career and they will now be asked to change and adapt. Still, they need to make their analysis at a very high level. Sound familiar? Basically, the NFL front offices are now catching up to what we have all been doing for the last year! As has been the experience in the business world, the NFL will go through a few growing pains. In the end, they will find a way to make it work, and may even be better for it. Again, it will mirror what the rest of us have already been through.
A year ago, in April, most of us were at a point of worry, confusion, and fear. Would our jobs survive? Would I get sick? How is this going to affect my family? Will anything be the same again? A year later, we are still dealing with some of these questions. But, for the most part, we have adapted. We have learned how to work differently and interact in ways that were for some, completely foreign. Some of us have learned how to both work and teach, as our children have also been remote for some or all of the school year. Sure, as a society, we still are having our struggles, but slowly and surely we are finding a way.
In the IT industry, some of us went from working in and looking at our data centers on a daily basis to now being physically in front of them only rarely. We have learned the power of remote tools that, in the past, may have been ignored. We have also learned (or maybe re-learned) that the change in where and how we work has stressed our already overtaxed security policies. Many of our customers have come to understand that security is not just the forest (the overarching security strategy). It is the trees as well (the basic building blocks of that strategy).
One of the security areas we have been championing with our customers lately is server security. So much has been made of the network and individual end-user devices that it is often forgotten just how important the server can be. A good example of industry-leading server security is what Hewlett Packard Enterprise has done with their Gen10 model. Here is a quick snapshot of what HPE incorporates into every server in their current line:
As you can see, HPE has focused very deliberately on ensuring the product inside each one of their servers is known, trusted, and secure. They also offer several options for you, the user, to combat security intrusions as they happen, handling things like ransomware attacks with excellent restore capabilities. While most server manufacturers do their best to focus on security, we have found that some features on the list above are unique to HPE. To me, it is also impressive that a company would work so hard to ensure the safety of a product that is theoretically already behind several other security layers. They understand that security is the job of every hardware component that is onsite.
The Covid-19 pandemic has changed the way we work and do business. Ultimately, we need to ensure we are changing the way we view our IT infrastructure as well. Attention to detail and vigilance will be a responsibility for everyone and everything that comes in contact with your infrastructure. Even though we have been adjusting for over a year, organizations like the NFL show us that continued adjustment and adaptation are still going strong. HPE and all our partners continue to change and improve their IT game as well!
Contact Zunesis to learn more about protecting your servers.
I have been getting a lot of questions recently about AMD and whether it should be used in the data center. The short answer is YES. Since AMD announced the EPYC processors, they have been gaining market share in the data center. Hewlett Packard Enterprise recently announced the industry’s broadest portfolio of AMD EPYC™ processor-based solutions to power everything from the edge to exascale supercomputers. They have been breaking performance records running AMD on the HPE Cray Supercomputers.
AMD EPYC Series Processors help propel your modern data center workloads with leadership performance and advanced security features. AMD has announced the 3rd generation of the EPYC processor. It sets the performance bar to new heights. Built on the Zen 3 core and the AMD Infinity Architecture, the AMD EPYC 7003 series provides the best performance, highest I/O, and integrated security. The video below shows the announcement for the new AMD EPYC generation 3 processors.
The current Hewlett Packard Enterprise product portfolio is built on the 2nd Gen AMD EPYC 7002 series processors. The 7002 series is based on the Zen 2 core, which delivers optimized performance-per-watt and a large L3 cache for low-latency access to data. These processors support up to 64 cores per socket, 128 threads, 4TB of DDR4 memory capacity across 8 memory channels, and 128 lanes of PCIe® 4.0 connectivity to reduce bottlenecks.
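To put those per-socket numbers in context, here is a minimal sketch that scales them up for a two-socket server. The function and dictionary names are illustrative, not part of any AMD or HPE tool, and the figures simply restate the limits quoted above (PCIe lane counts are left out, since in two-socket systems some lanes are repurposed for the socket-to-socket link).

```python
# Per-socket limits for the AMD EPYC 7002 series, as quoted above.
PER_SOCKET = {
    "cores": 64,            # up to 64 cores per socket
    "threads_per_core": 2,  # SMT gives 128 threads per socket
    "memory_tb": 4,         # up to 4 TB of DDR4 per socket
    "memory_channels": 8,   # 8 DDR4 memory channels per socket
}

def system_capacity(sockets: int) -> dict:
    """Scale the per-socket limits to a multi-socket system."""
    return {
        "cores": PER_SOCKET["cores"] * sockets,
        "threads": PER_SOCKET["cores"] * PER_SOCKET["threads_per_core"] * sockets,
        "memory_tb": PER_SOCKET["memory_tb"] * sockets,
        "memory_channels": PER_SOCKET["memory_channels"] * sockets,
    }
```

For example, `system_capacity(2)` works out to 128 cores, 256 threads, 8TB of memory, and 16 memory channels for a dual-socket configuration.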
Based on the AMD Infinity architecture, the 2nd Gen AMD EPYC Processors are the first server processors featuring a 7nm hybrid multi-die design and PCIe Gen4 I/O. The AMD EPYC Family continues to offer the most I/O and memory bandwidth in its class.
AMD EPYC processors boast a set of advanced security features, called AMD Infinity Guard. This includes the AMD secure processor, Secure Memory Encryption (SME), and Secure Encrypted Virtualization (SEV). These features help minimize potential attack surfaces as software is booted, executed, and processes your critical data.
With Secure Encrypted Virtualization (SEV), AMD EPYC processors help safeguard privacy and integrity by encrypting each virtual machine. This aids in protecting your data’s confidentiality even if a malicious virtual machine finds a way into your virtual machine’s memory or a compromised hypervisor reaches into a guest virtual machine.
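On a Linux host, you can get a rough sense of whether the CPU advertises these memory-encryption features by checking the `flags` line in `/proc/cpuinfo` for the `sme` and `sev` feature flags. The sketch below is written as a parser over cpuinfo text so it can be exercised against a captured dump; the helper names are illustrative, not part of any AMD tooling.

```python
# Sketch: check whether a CPU advertises AMD SME/SEV support by inspecting
# the "flags" line of a /proc/cpuinfo dump.

def cpu_flags(cpuinfo_text: str) -> set:
    """Collect all feature flags from /proc/cpuinfo text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags

def memory_encryption_support(cpuinfo_text: str) -> dict:
    """Report whether the SME and SEV feature flags are present."""
    flags = cpu_flags(cpuinfo_text)
    return {"sme": "sme" in flags, "sev": "sev" in flags}
```

On a live host you could call `memory_encryption_support(open("/proc/cpuinfo").read())`; note that the flags only tell you the CPU supports the features, not that the BIOS and hypervisor have enabled them.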
The HPE server portfolio ranges from the entry-level DL325 Gen10 Plus 1U server up to the HPE Cray EX supercomputer. There are one-socket and two-socket general-purpose server options, scalable building block options, high-performance computing options, and options built for AI and deep learning. Find out more about the HPE server product line with AMD EPYC processors here.
HPE will be announcing new products based on 3rd Gen AMD EPYC processors in mid-April 2021. More information will be available as the announcement date gets closer. According to the HPE press release dated March 15, 2021, HPE has secured 19 world records in key areas for optimizing workload experiences. This includes achieving leadership positions in virtualization, energy efficiency, database analytic workloads, and Java applications. To date, HPE servers and systems using 2nd and 3rd Gen AMD EPYC processors combine to hold a total of 32 world records.
After the announcement, the full portfolio of HPE servers and systems supporting the new 3rd Gen AMD EPYC processor will include the following:
All new HPE Apollo systems with the 3rd Gen AMD EPYC processor will be available worldwide on April 6. All new HPE ProLiant servers with the 3rd Gen AMD EPYC processor will be available worldwide on April 19. Contact Zunesis, if you would like to learn more about these processors.
Tech Elite 250 List Honors the Highest-Achieving IT Solution Providers in Vendor Certifications
DENVER, CO, March 22, 2021
Zunesis announced today that CRN®, a brand of The Channel Company, will honor Zunesis on its 2021 Tech Elite 250 list. This annual list features IT solution providers of all sizes in North America that have earned cutting-edge technical certifications from leading technology suppliers. These companies have separated themselves from the pack as top solution providers, earning multiple premier IT certifications, specializations, and partner program designations from industry-leading technology providers.
Businesses rely on solution providers for an enormous range of technologies, services, and expertise to help them meet today’s IT challenges — whether it’s a new implementation or a digital transformation initiative. To meet these demands, solution providers and MSPs must maintain high levels of training and certification from IT vendors and achieve the highest tiers within those vendors’ partner programs.
Each year, The Channel Company’s research group and CRN editors distinguish the most client-driven technical certifications in the North American IT channel. Solution providers that have earned these high honors — enabling them to deliver exclusive products, services, and customer support — are then selected from a pool of online applicants as well as from The Channel Company’s solution provider database.
Zunesis was founded in 2004, and for more than 16 years now, we have been focused on the design, implementation, support, and protection of our clients’ IT environments. Our team of IT professionals averages over 23 years of experience across all facets of the IT infrastructure, including Compute, Storage, Backup/Recovery, Networking, Hypervisors, and Microsoft Server and Desktop. It is important that Zunesis stays on top of the latest technologies and implementations to service our customers.
“We are proud of the impact that Zunesis has had in the Rocky Mountain region in servicing our clients’ needs,” commented Zunesis CEO Steve Shaffer. “Since the beginning, we have believed that the client’s needs trump everything else and that making the lives of our clients better is a high and worthy calling.”
“CRN’s Tech Elite 250 list highlights the top solution providers in the IT channel with the most in-depth technical knowledge, expertise, and certifications for providing the best level of service for their customers,” said Blaine Raddon, CEO of The Channel Company. “These solution providers have continued to extend their talents and abilities across various technologies and IT practices, demonstrating their commitment to really conveying the most exceptional business value to their customers.”
Coverage of the Tech Elite 250 will be featured in the April issue of CRN® Magazine and online at www.CRN.com/techelite250.
About Zunesis
Zunesis, headquartered in Englewood, Colorado, has been an HPE Platinum Partner for 16+ years. Zunesis has expert engineers in HPE server, storage, and networking technologies, along with common software applications like Veeam and Microsoft. Zunesis serves clients large and small, but our sweet spot is the mid-market organization — the heartbeat of the US economy. Our mission is to make the lives of our clients and community better. www.zunesis.com
Follow Zunesis: Twitter, LinkedIn and Facebook
Zunesis Company Contact:
Rachael Stiedemann
Zunesis