A flood that damages mission-critical equipment is among the worst scenarios a data center can face, offering little hope for a quick fix. On April 13, 2015, the Zunesis team received a “first responders call” from a long-term public sector customer who had the misfortune to experience a broken water main that left their main data center under 2 feet of standing water.

The call for help came at 4 p.m. on a Thursday. Zunesis joined forces with HP and immediately responded to assess the damage. By the next morning, it was clear that the data center was living on borrowed time. With 18 racks of equipment compromised, it was imperative that the team immediately initiate a full data center failover. Luckily, a secondary data center had just been completed and plans for a full DR site had been started, but those plans had stalled while the organization waited for additional funding to become available.

A two-fold plan was quickly put into place: expand the new data center to capacity while keeping the damaged data center up and running. The insurance company was immediately engaged to begin the replacement exercise while Zunesis and HP jumped in to “keep the lights on” in the existing data center. The team immediately needed a large supply of extra drives, power supplies, cables, and switches and, most daunting, an emergency 120TB SAN array to move data to safety.

By Friday morning, emails had been circulated to the highest levels of HP and Zunesis requesting emergency assistance. Zunesis quickly moved a 3PAR 7200 array from its own internal lab to the customer site and placed an emergency drive order for an additional 100TB to migrate data. HP called in emergency supplies from all over the Americas. Extra drives, power supplies, cables, switches, and even servers arrived throughout the weekend. Within 48 hours, the teams had built up enough reserves to keep the data center live until all of the data could be migrated.
Although all IT organizations know that it’s critical to have a solid DR plan and failover data center in place, the reality is that the data center spotlight is on monitoring and compliance, and the loss of a main data center is crippling. Water leaks are the second most common cause of catastrophic data center failures, after electrical failures. Taking the following three steps can help avoid a flood disaster:


  • Initial survey to ensure that the location of your data center is not in a flood plain and that the site is well protected from external water sources. Confirm that the fabric of the building is well enough designed and properly maintained to prevent the ingress of rain (even in extreme storm conditions). Check how sources of water inside the building are routed (hot and cold water storage tanks, pipe runs, waste pipes, WCs, as well as water-based fire suppression systems in the office space). Office space above a data center is almost always dangerous from a water ingress perspective.
  • Protection to ensure that any water entering the data center does not have the opportunity to build up and cause a problem. If you are really worried, install drains and a sump pump under the plenum floor. Ensure that the floor space is sealed and that all cable routes through partition walls are stopped up to be airtight and watertight.
  • Monitoring is critical in a data center – we absolutely need to be able to detect water under the plenum floor. Generally, water detection systems use a cable that runs under the floor and causes an alarm to be triggered if it comes into contact with water.

Click to find out more about our Disaster Recovery Assessments.

In my last post, I talked about the critical importance of turning the 70/30 rule on its head: the winners in your competitive set are the ones who spend less time, money, and human resources maintaining their current IT environment and more of their resources using IT to create a competitive advantage. Companies that use IT to help out-invent, out-innovate, and be more customer-focused than their competitors will find themselves top of mind and top of heart for their customers.
One of the areas that is gathering great momentum in the IT industry is mobility. More and more businesses are realizing the benefits of employees and customers being able to access their own information from any device and from anywhere.
There are many benefits to mobility that are influencing this trend:

  • Portability: A number of recent studies point to the fact that extending access to critical work applications to all employees leads to greater employee satisfaction and a general sense of enablement and empowerment.
  • Availability: The ability to access content from anywhere leads to improved productivity and efficiency. It also leads to much greater responsiveness. From a revenue perspective, opportunities for customers to buy at any hour from any device can be a game changer for many businesses. On more than one occasion, access to my Kindle account on those 3 a.m. over-caffeinated nights has certainly been good news for Amazon stockholders.
  • Power Savings: While the momentum is building for the truly mobile workforce, initial estimates point to a potential for up to 44% power savings through virtualization and BYOD initiatives.
  • Personal Ownership of Devices: Personal device ownership can be a real difference maker for businesses, leading to reduced costs, higher adoption, and better employee engagement and compliance.
  • Employee Satisfaction/Retention: Flexibility to work when and where they choose is fast becoming part of the overall compensation package for many workers. Multiple studies point to a significant number of younger workers considering flexible work locations and schedules, social media access, and the ability to use their own devices to be as critical in an overall compensation package as pay.

As with most things in life, there is a flip side. Along with the upsides of mobility come a number of crucial challenges:

  • Security
    • At the same time that employees and consumers are asking for easier, quicker access to their work applications and account information, high profile security breaches point out how sophisticated hackers have become at taking advantage of even the tiniest cracks in security.
  • Performance
    • For most employees and customers, the only thing worse than not having access to applications and accounts is having access hampered by a poorly performing application.
  • Cost Effectiveness
    • Ensuring secure data access with multiple devices from multiple locations at all hours of the day (and night) introduces challenges. How do you create a secure, high-performing environment that doesn’t erode both the cost savings associated with employees and customers owning their own devices and the revenue opportunities of allowing customers to interact with you anytime they choose?


Choosing the right partner
How can you dial in on just the right mix of access, security, and cost effectiveness? The answer is working with a partner that can bring to the table the right mix of products, design expertise, and experience. Zunesis is a partner that can work with you to build a solution that spans virtualization, security, server, storage, and networking capabilities. Allow us to work with you on a design that can deliver a high-performing, secure IT environment that cost-effectively provides all the benefits of mobility to your employees and customers. Make the call today, and see the difference that empowered employees and enthusiastic customers can make for you.

In my last post I wrote about the importance of understanding your current environment before setting out on a search for new data storage solutions. Understanding your Usable Capacity requirements, Data Characteristics, and Workload Profiles is essential when evaluating the many storage options available. Once you have assessed and documented your requirements, you should spend some time understanding the many technologies being offered with today’s shared storage solutions.
Currently, one of the most talked about shared storage technologies is Flash storage. While Flash technology isn’t new, it is more prevalent now than ever before in shared storage solutions. To help you determine whether or not Flash storage is relevant for your environment, I wanted to touch on answers to some of the basic questions regarding this technology. What is Flash storage? What problem does it solve? What considerations are unique when considering shared Flash storage?
In simple terms, Flash storage is an implementation of non-volatile memory used in Solid State Drives (SSD) or incorporated onto a PCIe card. Both of these implementations are designed as data storage alternatives to Hard Disk Drives (HDD, or “spinning disk”). In most shared storage implementations, you’ll see SSD, and that’s what we’ll talk about today.
As you begin looking at Flash storage options, you’ll see them defined by one of the following technologies:

  • SLC – Single-Level Cell
  • MLC – Multi-Level Cell
    • eMLC – Enterprise Multi-Level Cell
    • cMLC – Consumer Multi-Level Cell

There is a lot of information available on the internet to describe each of the SSD technologies in detail; so, for the purpose of this post, I’ll simply say that SLC is the most expensive of these while cMLC is the least expensive. The cost delta between the SSD technologies can be attributed to reliability and longevity. Given this statement, you might be inclined to disregard any of the MLC solutions for your business critical environment and stick with the solutions that use only SLC. In the past this may have been the right choice; however, the widespread implementation of Flash storage in recent years has brought about significantly improved reliability of MLC. Consequently, you’ll see eMLC and cMLC implemented in many of the Flash storage solutions available today.
Beyond the cell technology, there are three primary implementations of SSD used by storage manufacturers for their array solutions. Those implementations are:

  • All Flash – As you might have guessed, this implementation uses only SSD, without the possibility of including an HDD tier.
  • Flash + HDD – These solutions use a tier of SSD and usually provide a capacity tier made up of Nearline HDD. These solutions often provide automated tiering to migrate data between the two storage tiers.
  • Hybrid – These solutions offer the choice of including SSD along with HDD and can also offer the choice of whether or not to implement automated tiering.

So why consider SSD at all for your shared storage array? Because SSD has no moving parts, replacing HDD with SSD can result in a reduction of power and cooling requirements, especially for shared storage arrays where there can be a large number of drives. However, the most touted advantage of SSD over HDD is speed. SSD is considered when HDD isn’t able to provide an adequate level of performance for certain applications. There are many variables that impact the actual performance gain of SSD over HDD, but it isn’t unrealistic to expect anywhere from 15 to 50 times the performance. So, as you look at the solution options available for storage arrays that incorporate SSD, keep in mind that your primary reason for utilizing Flash is to achieve better performance of one or more workloads.
Historically, we have tried to meet performance demands of high I/O workloads by using large numbers of HDD; the more spinning disks you have reading and writing data, the better your response time will be. However, to achieve adequate performance in this way, we often ended up with far more capacity than required. When SSD first started showing up for enterprise storage solutions, we had the means to meet performance requirements with fewer drives. However, the drives were so small (50GB, 100GB) that we needed to be very miserly with what data was placed on the SSD tier.
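The spindle-versus-SSD arithmetic above can be sketched in a few lines. The per-drive IOPS and capacity figures below are illustrative assumptions for the sake of the example, not vendor specifications:

```python
# Rough sizing sketch: how many drives it takes to meet an IOPS target,
# and how much capacity that forces you to buy. The per-drive numbers
# are illustrative assumptions, not measured or vendor-quoted figures.

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Smallest whole number of drives that meets the IOPS target."""
    return -(-target_iops // iops_per_drive)  # ceiling division

def sizing(target_iops: int, iops_per_drive: int, gb_per_drive: int):
    """Return (drive count, total capacity in GB) for a given drive type."""
    n = drives_needed(target_iops, iops_per_drive)
    return n, n * gb_per_drive

# Assume a 15K RPM HDD delivers on the order of 180 IOPS at 600 GB each.
hdd_count, hdd_gb = sizing(20_000, iops_per_drive=180, gb_per_drive=600)

# Assume an enterprise SSD delivers around 10,000 IOPS at 400 GB each.
ssd_count, ssd_gb = sizing(20_000, iops_per_drive=10_000, gb_per_drive=400)

print(f"HDD: {hdd_count} drives, {hdd_gb} GB total")  # capacity far beyond need
print(f"SSD: {ssd_count} drives, {ssd_gb} GB total")
```

Under assumptions like these, a 20,000-IOPS target takes over a hundred HDDs (and tens of terabytes of capacity you may not need) but only a couple of SSDs, which is exactly the capacity overshoot described above.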
Today you’ll find a fairly wide range of capacity options, anywhere from 200GB to 1.92TB per SSD. Consequently, you won’t be challenged trying to meet the capacity requirements of your environment. Given this reality you may be tempted to simply default to an All Flash solution. But, because SSD solutions are still much more expensive than HDD, you want to make sure to match your unique workload requirements accordingly. For instance, it may not make sense for you to pay the SSD premium to support your user file shares; but you might want to consider SSD for certain database requirements or for VDI. This is where you’ll be thankful that you took the time to understand your capacity and workload requirements.
When trying to achieve better performance of your applications, don’t let the choice of SSD be your only consideration. Remember, resolving a bottleneck in one part of the I/O path may simply move the bottleneck somewhere else. Be sure you understand the limitations of the controllers, Fibre Channel switches, network switches, and HBAs.
Finally, you’ll need to understand how manufacturers differentiate their implementations of Flash technology. Do they employ Flash optimization? Is deduplication, compaction, or thin provisioning part of the design? Manufacturers may use the same terminology to describe these features, but their implementations of the technology may be very different. I’ll cover some of these in my next blog post. In the meantime, you may want to review the 2015 DCIG Flash Memory Buyers Guide.

Strategic technology trends are defined as having potentially significant impact on organizations in the next three years. Here is a summary of a few trends according to Forbes; Gartner, Inc.; Computerworld; and other technology visionaries:

  1. Wearable Devices – Uses of wearable technology are influencing the fields of health and medicine, fitness, aging, education, gaming, and finance. Such devices include bracelets, smart watches, and Google Glass. Wearable technology markets are anticipated to exceed $6 billion by 2016.

  2. Cloud Computing – Gartner says cloud computing will become the bulk of new IT spend by 2016. Business drivers behind cloud initiatives include disaster recovery or backup, increased IT cost, new users or services, and increased IT complexity.

  3. Smart Machines – Smart machines include robots, self-driving cars, and other inventions that are able to make decisions and solve problems without human intervention. Forbes states that 60% of CEOs believe that smart machines are a “futurist fantasy” but will, nonetheless, have a meaningful impact on business.

  4. 3D Printing – 3D printing offers the ability to create solid physical models. The cost of 3D printing will decrease in the next three years, leading to rapid growth of the market for these machines. Gartner highlights that expansion will be especially strong in industrial, biomedical, and consumer applications, proving that 3D printing is a viable and cost-effective way to reduce costs through improved designs, streamlined prototyping, and short-run manufacturing. Worldwide shipments of 3D printers are expected to double.

  5. New Wi-Fi Standards – Prepare for the next generation of Wi-Fi: first, the emergence of the next wave of 802.11ac, and second, the development of the 802.11ax standard. Wi-Fi hotspots are expected to be faster and more reliable. The Wi-Fi Alliance predicts that products based on a draft of the standard will likely reach markets by early 2016.

  6. Mind-Reading Machines – IBM predicts that by 2016 consumers will be able to control electronics by using brain power only. People will not need passwords. By 2016, consumers will have access to gadgets that read their minds, allowing them to call friends and move computer cursors.

  7. Mobile Devices – Mobile device sales will continue to soar, and we will see less of the standard desktop computer. Worldwide mobile device shipments are expected to reach 2.6 billion units by 2016. Tablet PCs will be the fastest growing category with a 35% growth rate followed by smartphones at 18%.

  8. Big Data – Big data refers to the exponential growth and availability of data, both structured and unstructured. The vision is that organizations will be able to take data from any source, harness relevant data, and analyze it to reduce cost, reduce time, develop new products, and make smarter business decisions.

Only time will tell which of these will materialize as well as to what extent. However, one thing is certain: technology is getting faster, smarter, and more mobile by the minute. Interacting with technology any place and any time has become the norm, and this trend will continue to have a greater and greater impact on all types of organizations. Look up: George Jetson might be your next employee.


Anyone who has ever worked with Microsoft’s Active Directory, either as an end user or an administrator, has undoubtedly come across strangeness and unexplained occurrences. Active Directory serves many purposes: identity management, resource policy deployment, and user security management, to name a few. Active Directory handles its extremely complex inner workings in a very robust and flexible way. It is designed to resist outages and lost communication while continuing to provide services to users. While all of that is good from an availability standpoint, it also makes it easy to hide problems from its administrators.


Help desk conversations about Active Directory are often peppered with phrases like “I don’t know why that happened,” “That’s weird. I’ve never seen it do that before,” and “Oh well, it works now.” These conversations can lead to the realization that Active Directory isn’t totally healthy and could be performing better than it currently is. Something as simple as logging on to a workstation may generate multiple errors that aren’t visible to the end user except as a logon delay.
The health of Active Directory can be affected in many ways. Changes to Active Directory throughout the years can add up to significant problems that seem to show up suddenly.  Examples of these types of changes could be any of the following:

  • Adding or removing domain controllers
  • Upgrading domain controllers
  • Adding or removing Exchange servers
  • Adding or removing physical sites to your environment
  • Extending the schema
  • Unreliable communication between domain controllers

These changes, if done incorrectly, can cause multiple problems including log on issues, Active Directory replication failures, DNS misconfiguration, or GPO problems to name a few.
Ask yourself these simple questions to determine whether your Active Directory is less healthy than it could be:

  • Do your users complain of strange log on or authentication issues?
  • Does it take an abnormally long time for users to log on to their workstations?
  • Do your GPOs work sometimes and not other times?
  • Do you get strange references to old domain controllers or Exchange servers that have long since been removed?
  • Do you have issues resolving server names through DNS?
  • Do your DNS servers get out of sync?
  • Do DNS entries mysteriously disappear?
  • And maybe most importantly, have you ever employed an admin that was given full rights to Active Directory who you later learned was not qualified?
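
A quick scripted check can surface some of these symptoms before users report them. The sketch below is a minimal example using only Python’s standard library; it verifies that a list of hostnames resolves through DNS, and the hostnames shown are hypothetical placeholders for your own domain controllers and servers:

```python
# Minimal DNS sanity-check sketch, assuming you maintain a list of
# hostnames (domain controllers, Exchange servers, etc.) that should
# always resolve. The example hostnames below are hypothetical.
import socket

def check_resolution(hostnames):
    """Map each hostname to its resolved IP, or None if the lookup fails."""
    results = {}
    for name in hostnames:
        try:
            results[name] = socket.gethostbyname(name)
        except socket.gaierror:
            results[name] = None  # resolution failed -- investigate DNS
    return results

if __name__ == "__main__":
    # Replace with the servers you expect to resolve in your environment.
    for host, ip in check_resolution(["dc01.example.local", "localhost"]).items():
        print(f"{host}: {ip if ip else 'FAILED to resolve'}")
```

Run on a schedule against every domain controller and critical server name, a check like this turns “DNS entries mysteriously disappear” from a user complaint into an alert an administrator sees first.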

Active Directory is integral to the IT success of just about every company. Finding issues and correcting them before they become a problem can prevent outages and future losses in revenue. Whether you are currently experiencing noticeable issues or just want a “feel good” report on the current status of your Active Directory, Zunesis can provide that peace of mind. With over 15 years supporting Microsoft Active Directory services for our customers, we have the experience and skills to get your Active Directory to a healthy state. Our method of using various tools to extract Active Directory information, analyze that data, and prepare and deliver a detailed report has proven very successful. Contact Zunesis today to set up an appointment to talk about your Active Directory needs.

