
A flood that damages mission-critical equipment is among the worst scenarios a data center can face, offering little hope for a quick fix. On April 13, 2015, the Zunesis team received a “first responders call” from a long-term public sector customer who had the misfortune to experience a broken water main that left their main data center under 2 feet of standing water.

The call for help came at 4 p.m. on a Thursday afternoon. Zunesis joined forces with HP and immediately responded to assess the damage. By the next morning, it was clear that the data center was living on borrowed time. With 18 racks of equipment compromised, it was imperative that the team immediately initiate a full data center failover. Luckily, a secondary data center had just been completed, and plans for a full DR site had been started; but those plans had stalled while the organization waited for additional funding to become available.

A two-fold plan was quickly put into place: swiftly expand the new data center to capacity while keeping the damaged data center up and running. The insurance company was immediately engaged to begin the replacement exercise while Zunesis and HP jumped in to “keep the lights on” in the existing data center. Immediately needed were a large supply of extra drives, power supplies, cables, and switches and, most daunting, an emergency 120TB SAN array to move data to safety.

By Friday morning, emails had been circulated to the highest levels of HPE and Zunesis requesting emergency assistance. Zunesis quickly moved a 3PAR 7200 array from its own internal lab to the customer site and added another 100TB of capacity through an emergency drive order to migrate data. HP called in emergency supplies from all over the Americas. Extra drives, power supplies, cables, switches, and even servers arrived throughout the weekend. Within 48 hours, the teams had built up enough reserves to keep the data center live until all of the data could be migrated.
Although all IT organizations know that it’s critical to have a solid DR plan and failover data center in place, the reality is that the data center spotlight is on monitoring and compliance, and the loss of a main data center is crippling. The second most common cause of catastrophic data center failures (after electrical) is water leaks. Taking the following three steps can help avoid a flood disaster:


  • Initial survey: Ensure that the location of your data center is not in a flood plain and that the site is well protected from external water sources. Confirm that the fabric of the building is well enough designed and properly maintained to prevent the ingress of rain (even in extreme storm conditions). Check how sources of water inside the building are routed (hot and cold water storage tanks, pipe runs, waste pipes, WCs, as well as water-based fire suppression systems in the office space). Office space above a data center is almost always dangerous from a water ingress perspective.
  • Protection: Ensure that any water entering the data center does not have the opportunity to build up and cause a problem. If you are really worried, install drains and a sump pump under the plenum floor. Ensure that the floor space is sealed and that all cable routes through partition walls are stopped up to be air and water tight.
  • Monitoring: This is critical in a data center – we absolutely need to be able to detect water under the plenum floor. Generally, water detection systems use a cable that runs under the floor and triggers an alarm if it comes into contact with water; a minimal polling sketch follows this list.
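
Most leak-detection controllers can report status over the network (via SNMP or a simple HTTP API). As a rough illustration only – the sensor endpoint, its JSON shape, and the mail hosts below are all hypothetical – a minimal polling loop might look like this:

```python
import json
import smtplib
import time
import urllib.request
from email.message import EmailMessage

SENSOR_URL = "http://dcim.example.internal/sensors/plenum-leak"  # hypothetical controller endpoint
POLL_SECONDS = 30

def leak_detected() -> bool:
    """Ask the leak-detection cable's controller whether it senses water."""
    with urllib.request.urlopen(SENSOR_URL, timeout=5) as resp:
        return bool(json.load(resp).get("wet", False))  # assumed JSON field

def send_alert() -> None:
    """Page the on-call team through the internal mail relay."""
    msg = EmailMessage()
    msg["Subject"] = "WATER DETECTED under plenum floor"
    msg["From"] = "dcim-alerts@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content("Leak-detection cable reports water contact. Dispatch facilities now.")
    with smtplib.SMTP("mail.example.internal") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    while True:
        if leak_detected():
            send_alert()
        time.sleep(POLL_SECONDS)
```

In practice you would rate-limit the alerts and feed the same signal into whatever DCIM or monitoring platform you already watch; the under-floor cable is only useful if something is actually listening to it.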

Click to find out more about our Disaster Recovery Assessments.

In my last post, I talked about the critical importance of turning the 70/30 rule on its head: the winners in your competitive set are the ones who are able to spend less time, money, and human resources maintaining their current IT environment and more of their resources using IT to create a competitive advantage. Companies that use IT to help out-invent, out-innovate, and be more customer-focused than their competitors will find themselves top of mind and top of heart for their customers.
One of the areas that is gathering great momentum in the IT industry is mobility. More and more businesses are realizing the benefits of employees and customers being able to access their own information from any device and from anywhere.
There are many benefits to mobility that are influencing this trend:

  • Portability: A number of recent studies point to the fact that extending access to critical work applications to all employees leads to greater employee satisfaction and a general sense of enablement and empowerment.
  • Availability: The ability to access content from anywhere leads to improved productivity and efficiency. It also leads to much greater responsiveness. From a revenue perspective, opportunities for customers to buy at any hour from any device can be a game changer for many businesses. On more than one occasion, access to my Kindle account on those 3AM over-caffeinated nights has certainly been good news for any Amazon stockholder.
  • Power Savings: While the momentum is building for the truly mobile workforce, initial estimates point to a potential for up to 44% power savings through virtualization and BYOD initiatives.
  • Personal Ownership of Devices: Employee-owned devices can be a real difference maker for businesses, leading to reduced costs, higher adoption, and better employee engagement and compliance.
  • Employee Satisfaction/Retention: Flexibility to work when and where they choose is fast becoming a part of the overall compensation package for many workers. Multiple studies point to a significant number of younger workers considering flexible work locations and schedules, social media access, and the ability to use their own devices to be as critical in an overall compensation package as pay itself.

As with most things in life, there is a flip side. Along with the upsides of mobility come a number of crucial challenges:

  • Security
    • At the same time that employees and consumers are asking for easier, quicker access to their work applications and account information, high profile security breaches point out how sophisticated hackers have become at taking advantage of even the tiniest cracks in security.
  • Performance
    • For most employees and customers, the only thing worse than not having access to applications and accounts is having access hampered by a poorly performing application.
  • Cost Effectiveness
    • Ensuring secure data access with multiple devices from multiple locations at all hours of the day (and night) introduces challenges. How do you create a secure, high-performing environment that doesn’t erode both the cost savings associated with employees and customers owning their own devices and the revenue opportunities of allowing customers to interact with you anytime they choose?


Choosing the right partner
How can you dial in on just the right mix of access, security, and cost effectiveness? The answer is working with a partner that can bring to the table the right mix of products, design expertise, and experience. Zunesis is a partner that can work with you to build a solution that spans virtualization, security, server, storage, and networking capabilities. Allow us to work with you on a design that can deliver a high-performing, secure IT environment that cost-effectively provides all the benefits of mobility to your employees and customers. Make the call today, and see the difference that empowered employees and enthusiastic customers can make for you.

In my last post I wrote about the importance of understanding your current environment before setting out on a search for new data storage solutions. Understanding your Usable Capacity requirements, Data Characteristics, and Workload Profiles is essential when evaluating the many storage options available. Once you have assessed and documented your requirements, you should spend some time understanding the many technologies being offered with today’s shared storage solutions.
Currently, one of the most talked about shared storage technologies is Flash storage. While Flash technology isn’t new, it is more prevalent now than ever before in shared storage solutions. To help you determine whether or not Flash storage is relevant for your environment, I wanted to touch on answers to some of the basic questions regarding this technology. What is Flash storage? What problem does it solve? What considerations are unique when considering shared Flash storage?
In simple terms, Flash storage is an implementation of non-volatile memory used in Solid State Drives (SSD) or incorporated onto a PCIe card. Both of these implementations are designed as data storage alternatives to Hard Disk Drives (HDD/“spinning disk”). In most shared storage implementations, you’ll see SSD; and that’s what we’ll talk about today.
As you begin looking at Flash storage options, you’ll see them defined by one of the following technologies:

  • SLC – Single-Level Cell
  • MLC – Multi-Level Cell
    • eMLC – Enterprise Multi-Level Cell
    • cMLC – Consumer Multi-Level Cell

There is a lot of information available on the internet to describe each of the SSD technologies in detail; so, for the purpose of this post, I’ll simply say that SLC is the most expensive of these while cMLC is the least expensive. The cost delta between the SSD technologies can be attributed to reliability and longevity. Given this statement, you might be inclined to disregard any of the MLC solutions for your business critical environment and stick with the solutions that use only SLC. In the past this may have been the right choice; however, the widespread implementation of Flash storage in recent years has brought about significantly improved reliability of MLC. Consequently, you’ll see eMLC and cMLC implemented in many of the Flash storage solutions available today.
Beyond the cell technology, there are three primary implementations of SSD used by storage manufacturers for their array solutions. Those implementations are:

  • All Flash – As you might have guessed, this implementation uses only SSD, without the possibility of including an HDD tier.
  • Flash + HDD – These solutions use a tier of SSD and usually provide a capacity tier made up of Nearline HDD. These solutions often provide automated tiering to migrate data between the two storage tiers.
  • Hybrid – These solutions offer the choice of including SSD along with HDD and can also offer the choice of whether or not to implement automated tiering (illustrated conceptually in the sketch after this list).
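
To picture what automated tiering does under the covers, here is a deliberately tiny Python sketch. It is purely conceptual – not any vendor’s actual algorithm – and the block counts and tier size are made-up numbers: the blocks accessed most often get promoted to the small, fast SSD tier.

```python
# Toy model of automated tiering: count accesses per block and keep the
# hottest blocks on the (small) SSD tier. Real arrays do this with far more
# sophistication: sub-LUN granularity, scheduled migration windows, and so on.
from collections import Counter

SSD_TIER_BLOCKS = 4            # capacity of the fast tier, in blocks

access_counts = Counter()

def record_access(block_id: int) -> None:
    access_counts[block_id] += 1

def ssd_resident_blocks() -> set:
    """Blocks currently promoted to the SSD tier: simply the hottest ones."""
    return {blk for blk, _ in access_counts.most_common(SSD_TIER_BLOCKS)}

# Simulate a skewed workload: block 7 is hot, the rest are lukewarm.
for blk in [7, 7, 7, 1, 2, 7, 3, 7, 4, 5, 7, 1, 2, 7]:
    record_access(blk)

print("promoted to SSD:", sorted(ssd_resident_blocks()))   # -> [1, 2, 3, 7]
```

The point of tiering is exactly this picture: the SSD tier only has to be big enough to hold the hot fraction of your data, not all of it.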

So why consider SSD at all for your shared storage array? Because SSD has no moving parts, replacing HDD with SSD can result in a reduction of power and cooling requirements, especially for shared storage arrays where there can be a large number of drives. However, the most touted advantage of SSD over HDD is speed. SSD is considered when HDD isn’t able to provide an adequate level of performance for certain applications. There are many variables that impact the actual performance gain of SSD over HDD, but it isn’t unrealistic to expect anywhere from 15 to 50 times the performance. So, as you look at the solution options available for storage arrays that incorporate SSD, keep in mind that your primary reason for utilizing Flash is to achieve better performance of one or more workloads.
Historically, we have tried to meet performance demands of high I/O workloads by using large numbers of HDD; the more spinning disks you have reading and writing data, the better your response time will be. However, to achieve adequate performance in this way, we often ended up with far more capacity than required. When SSD first started showing up for enterprise storage solutions, we had the means to meet performance requirements with fewer drives. However, the drives were so small (50GB, 100GB) that we needed to be very miserly with what data was placed on the SSD tier.
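To make the spindles-for-IOPS trade-off concrete, here is a back-of-the-envelope sizing sketch in Python; the per-drive IOPS and capacity figures are common rules of thumb assumed for illustration, not vendor specifications:

```python
# Rough spindle-count sizing: how many drives does a 12,000-IOPS workload need,
# and how much capacity do you end up buying along the way?

REQUIRED_IOPS = 12_000

# name: (rule-of-thumb IOPS per drive, usable GB per drive) -- illustrative only
drive_types = {
    "15K RPM HDD":       (180, 600),
    "7.2K Nearline HDD": (80, 4_000),
    "SSD":               (5_000, 400),
}

for name, (iops_each, gb_each) in drive_types.items():
    drives_needed = -(-REQUIRED_IOPS // iops_each)   # ceiling division
    total_tb = drives_needed * gb_each / 1_000
    print(f"{name:>18}: {drives_needed:4d} drives -> {total_tb:7.1f} TB capacity")
```

With these assumptions, the 15K HDD tier needs 67 spindles (roughly 40 TB of capacity you may not want), the nearline tier needs 150 spindles (600 TB!), and the SSD tier meets the same I/O target with 3 drives. That is the capacity overshoot described above, in numbers.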
Today you’ll find a fairly wide range of capacity options, anywhere from 200GB to 1.92TB per SSD. Consequently, you won’t be challenged trying to meet the capacity requirements of your environment. Given this reality, you may be tempted to simply default to an All Flash solution. But, because SSD solutions are still much more expensive than HDD, you want to make sure to match your unique workload requirements accordingly. For instance, it may not make sense for you to pay the SSD premium to support your user file shares; but you might want to consider SSD for certain database requirements or for VDI. This is where you’ll be thankful that you took the time to understand your capacity and workload requirements.
When trying to achieve better performance of your applications, don’t let the choice of SSD be your only consideration. Remember, resolving a bottleneck in one part of the I/O path may simply move the bottleneck somewhere else. Be sure you understand the limitations of the controllers, fibre channel switches, network switches, and HBAs.
Finally, you’ll need to understand how manufacturers can differentiate their implementation of Flash technology. Do they employ Flash optimization? Is Deduplication, compaction, or thin provisioning part of the design? Manufacturers may use the same terminology to describe these features, but their implementation of the technology may be very different. I’ll cover some of these in my next blog post. In the meantime, you may want to review the 2015 DCIG Flash Memory Buyers Guide.

Strategic technology trends are defined as having potentially significant impact on organizations in the next three years. Here is a summary of a few trends according to Forbes; Gartner, Inc.; Computerworld; and other technology visionaries:

  1. Wearable Devices – Uses of wearable technology are influencing the fields of health and medicine, fitness, aging, education, gaming, and finance. Such devices include bracelets, smart watches, and Google Glass. Wearable technology markets are anticipated to exceed $6 billion by 2016.
  2. Cloud Computing – Gartner says cloud computing will become the bulk of new IT spend by 2016. Business drivers behind cloud initiatives include disaster recovery or backup, increased IT cost, new users or services, and increased IT complexity.
  3. Smart Machines – Smart machines include robots, self-driving cars, and other inventions that are able to make decisions and solve problems without human intervention. Forbes states that 60% of CEOs believe smart machines are a “futurist fantasy”; they will, nonetheless, have a meaningful impact on business.
  4. 3D Printing – 3D printing offers the ability to create solid physical models. The cost of 3D printing will decrease in the next three years, leading to rapid growth of the market for these machines. Industrial use will also continue its rapid expansion. Gartner expects especially strong growth in industrial, biomedical, and consumer applications, evidence that 3D printing is a viable, cost-effective way to reduce costs through improved designs, streamlined prototyping, and short-run manufacturing. Worldwide shipments of 3D printers are expected to double.
  5. New Wi-Fi Standards – Prepare for the next generation of Wi-Fi: first, the emergence of the next wave of 802.11ac; second, the development of the 802.11ax standard. Wi-Fi hotspots are expected to be faster and more reliable. The Wi-Fi Alliance predicts that products based on a draft of the standard will likely reach markets by early 2016.
  6. Mind-Reading Machines – IBM predicts that by 2016 consumers will be able to control electronics by using brain power only. People will not need passwords. By 2016, consumers will have access to gadgets that read their minds, allowing them to call friends and move computer cursors.
  7. Mobile Devices – Mobile device sales will continue to soar, and we will see less of the standard desktop computer. Worldwide mobile device shipments are expected to reach 2.6 billion units by 2016. Tablet PCs will be the fastest growing category with a 35% growth rate, followed by smartphones at 18%.
  8. Big Data – Big data refers to the exponential growth and availability of data, both structured and unstructured. The vision is that organizations will be able to take data from any source, harness relevant data, and analyze it to reduce cost, reduce time, develop new products, and make smarter business decisions.

Only time will tell which of these will materialize as well as to what extent. However, one thing is certain: technology is getting faster, smarter, and more mobile by the minute. Interacting with technology any place and any time has become the norm, and this trend will continue to have a greater and greater impact on all types of organizations. Look up: George Jetson might be your next employee.


Anyone who has ever worked with Microsoft’s Active Directory, either as an end user or an administrator, has undoubtedly come across strangeness and unexplained occurrences.  Active Directory serves many purposes: identity management, resource policy deployment, and user security management, to name a few.  Active Directory handles its extremely complex inner workings in a very robust and flexible way.  It is designed to resist outages and lost communication while continuing to provide services to users.  While all of that is good from an availability standpoint, it also makes it easy to hide problems from its administrators.


Help desk conversations about Active Directory are often peppered with phrases like “I don’t know why that happened,” “That’s weird. I’ve never seen it do that before,” and “Oh well, it works now.” These conversations can lead to the realization that Active Directory isn’t totally healthy and could be performing better than it currently is.  Something as simple as logging on to a workstation may generate multiple errors that aren’t visible to the end user except as a logon delay.
The health of Active Directory can be affected in many ways. Changes to Active Directory throughout the years can add up to significant problems that seem to show up suddenly.  Examples of these types of changes could be any of the following:

  • Adding or removing domain controllers
  • Upgrading domain controllers
  • Adding or removing Exchange servers
  • Adding or removing physical sites in your environment
  • Extending the schema
  • Unreliable communication between domain controllers

These changes, if done incorrectly, can cause multiple problems, including logon issues, Active Directory replication failures, DNS misconfiguration, and GPO problems, to name a few.
Here are some simple questions you can ask yourself to determine whether your Active Directory is as healthy as it could be (a quick scripted check follows the list):

  • Do your users complain of strange log on or authentication issues?
  • Does it take an abnormally long time for users to log on to their workstations?
  • Do your GPOs work sometimes and not other times?
  • Do you get strange references to old domain controllers or Exchange servers that have long since been removed?
  • Do you have issues resolving server names through DNS?
  • Do your DNS servers get out of sync?
  • Do DNS entries mysteriously disappear?
  • And maybe most importantly, have you ever employed an admin that was given full rights to Active Directory who you later learned was not qualified?
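
If several of these questions hit home, a first-pass check costs very little: Microsoft ships the repadmin and dcdiag diagnostics with the AD DS tools. A minimal wrapper – just a sketch, assuming the tools are on the PATH of a domain controller or a management host with RSAT installed – might look like this:

```python
# Run two standard Microsoft AD diagnostics and flag anything suspicious.
import subprocess

CHECKS = [
    ["repadmin", "/replsummary"],  # summarizes replication health across DCs
    ["dcdiag", "/q"],              # runs the standard DC test suite, errors only
]

for cmd in CHECKS:
    print(f"\n=== {' '.join(cmd)} ===")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0 or result.stderr.strip():
        print("!! this check reported problems - review the output above")
```

A clean `repadmin /replsummary` and a silent `dcdiag /q` don’t guarantee a healthy directory, but failures in either are exactly the kind of hidden problems described above.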

Active Directory is integral to the IT success of just about every company.  Finding issues and correcting them before they become a problem can prevent outages and future losses in revenue.  Whether you are currently experiencing noticeable issues or just want a “feel good” report on the current status of your Active Directory, Zunesis can provide that peace of mind.  With over 15 years supporting Microsoft Active Directory services for our customers, we have the experience and skills to get your Active Directory to a healthy state.  Our method of using various tools to extract Active Directory information, analyze that data, and prepare and deliver a detailed report has proven very successful.  Contact Zunesis today to set up an appointment to talk about your Active Directory needs.

The ability to form closer customer relationships, stay at the forefront of market trends, and create competitive differentiation comes at a critical time for marketing organizations across all industries. In today’s competitive world, there are too many companies competing in an environment where there are not enough customers. This is especially true for high-tech and information technology companies where technology advancement occurs at a rapid pace. Now, more than ever, marketing organizations must create clear differentiation by tapping into new streams of customer, market, and competitive data made available from external sources like the public web. We’ve proven that when this is done with some discipline, exceptional results can be achieved.
We’ve seen how a large marketing organization’s ability to discover, synthesize, and act upon public web data can have a direct and tangible impact on their marketing strategy. I’ve had the good fortune to work with HP and other large brands in my career, and they all grapple with the same issue: they are drowning in a sea of data, all made possible by the public web and social media. I’ve realized that in order to see tangible results from social media listening (read: glean actionable insight), technology alone only gets you so far. When combining technology with human analysis, however, quantifiable value can be realized.
The problem with using social media listening technology as a standalone solution is its inability to automatically provide the in-depth analysis that a human is capable of providing. For example, we worked with a major sports team that wanted to increase social media engagement amongst women. A listening platform may provide data about what content women are engaging with online, but it will not offer the strategy needed to generate more engagement. Human analysis needs to be conducted in order to understand and digest certain demographic data trends, interpret the underlying reason for engagement, and develop specific social media strategies that will better target and engage female audiences. In other words, the platform only gets you so far. The interpretation and analysis that takes a market research approach to understanding the data, combined with the tool (technology), is necessary to glean insight.
We often forget that business, like life, is merely a series of decisions. The success of a company can be directly related to the worth of the decisions made by its employees; this is particularly true for marketing teams. By combining social media listening technology with human analysis, it is possible to take a complex data set now available online and categorize and synthesize it to identify key themes, influential voices, and conversation share. Companies now have access to a vast data set provided by the public web. The opportunity, then, is to convert this data into actionable insight that is directly linked to marketing processes to drive business outcomes.

This is a blog about a journey: a journey from being the customer of an IT Solutions Provider to servicing the customer. A journey about taking my perceived thoughts and ideas about the way I should have been treated as a customer and turning them into an action plan or template for the way I now treat my customers.
Background:
I am a college graduate with a Bachelor’s degree in Computer Science. In my 20 years in the IT industry, I have had a number of different titles – End User Support Specialist, Network Administrator, Sr. Systems Engineer, IT Manager – all with 3 companies. I did a 5-year stint with Company A in the legal profession in Ohio, and I served 15 years with Company B in the financial sector in Las Vegas.
Over the years, I have always wondered what it would be like to work for a vendor. I was curious about the ability to engage a wider range of technology, more in depth than the “in-house” IT professional usually experiences. In fact, I had multiple offers over the years; but Company B was a great place to work, so I never left.
During the early parts of 2014, I felt things were getting stale with Company B; so I got serious about a career change. I had a great relationship with Zunesis as a customer, so I explored employment opportunities with them. There was an immediate need and fit. In a few short weeks, I signed on with Zunesis; and after a nice week and a half vacation in California with my family, I started a new career.
I had been the main contact for IT vendors at Company B. I have experienced all sorts of sales people:

  • The quiet type who let their products do the talking,
  • The boisterous type who think wining and dining is the way to close a sale (which I never complained about),
  • The “inside” sales guy who is constantly calling just because I once asked for a quote,
  • The “BFF,” that is, as long as there is a pending sale,
  • And, finally, the good ones who handle the relationship the way you ask them to and put the “Customer First.”

I have also experienced all types of engineers:

  • The “Of course I will come to Vegas and help you, stay out all night playing blackjack, and fall asleep in the conference room the next day,” type (not kidding),
  • The “I know everything about everything,” type,
  • And the good ones who are thorough, concise, and do what they say they are going to do. Again, the ones who put the “Customer First.”

As a customer I have experienced several types of sales people and engineers; this experience will help shape the type of Solution Architect I’d like to be as well as the type of sales people with whom I will align. I also bring to Zunesis years of experience as a customer, someone who has walked miles in the customer’s shoes. I bring a refreshing perspective to Zunesis’ motto: “Customer First.”
Stay tuned for my next entry as I describe my experiences in my first six months at Zunesis.

The vendor world is not particularly gifted at letting the rest of the world know when they have specials taking place. Today I would like to rectify that situation and focus on a fantastic deal that HPE has for their ProLiant servers, an additional value-add included with each Intel Xeon server purchased.
HPE, for no additional cost, is giving clients a 1TB license of HPE StoreVirtual VSA (Virtual SAN Appliance) software with all Intel Xeon-based HP ProLiant DL, BL, and ML series servers. Essentially, VSA is a virtual machine which can utilize “captive” internal unused disk installed in a server. This storage with VSA software can appear as an iSCSI SAN which can then be used by other servers or virtual machines.
The HPE StoreVirtual VSA technology is based on LeftHand OS (a well-known and very mature storage operating environment). If two or more VSAs are installed (on different servers), a fault-tolerant storage environment can be created, as the LeftHand OS software is fully featured and includes such technologies as snapshots, replication, thin provisioning, etc.
HPE StoreVirtual VSA is just a part of HPE’s Software-Defined Storage initiatives, including VSA-powered deduplication appliances (StoreOnce VSA), and hooks into the industry’s leading virtual machine backup tool, Veeam.
Configured in just minutes, HPE ProLiant Gen9 servers, which now feature an integrated single-click VSA deployment, can easily be up and running with an operating system and VSA software to support virtualized applications and shared storage on the same hardware.
If your server was purchased prior to this promotion, or there is no “coupon code sticker” on your server, you can still trial this storage software for 60 days to see if it would fit your needs.
If you like what you see with this promo or need more capacity, even in your older/existing servers, please contact Zunesis for a quote or for more information.
See what you gain with HPE StoreVirtual VSA!

More Information: http://www8.hp.com/us/en/products/data-storage/server-vsa.html

Product Datasheet: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA5-5143ENW&cc=us&lc=en

In my last blog, we explored the Customer Service Pillar of being Responsive in the context of what customers are expecting from the IT Solution providers today.  As a reminder, I am writing from a customer service roadmap called CustomerFIRST where the word FIRST is an acrostic.
This week I am addressing the 4th Pillar of successful customer service – being Strategic.
When I was a young IT sales professional in my 20s, I learned a very important lesson from a wise Chief Information Officer (CIO) who decided to invest in my development. The story I am about to tell is true, but the names have been omitted to protect the innocent. After 2 years of hard work, we had established our company as a strategic provider of IT Solutions for one of the largest railroads in America. I had developed a strong trust-based relationship with the CIO, and she was an outspoken, positive reference for our work.
I took my hard-fought experiences and reached out to the CIO of another large railroad.  After an exhaustive “never give up” cold calling campaign, I wore the poor CIO down, and he agreed to meet with me. As my airplane touched down in the city where this railroad was headquartered, I could feel my excitement building. You see, I had so much to share with this CIO. I arrived 30 minutes early wearing a dark suit and my power tie. As I waited outside his office, I reviewed in my mind what I had rehearsed and prepared. Promptly at 11 a.m., I was escorted into a large office where Mr. CIO sat behind his desk looking very professional and somewhat intimidating.
We exchanged some pleasantries about the weather and a recent football game, and then I began my pitch. My confidence level was at an all-time high, and the words flowed from my mouth like a beautiful river. The CIO seemed to be listening to everything I said; and he would occasionally nod and say, “You don’t say.” I ended my discussion with my closing statement, “As you can see, we have proven technical expertise in all of these important areas; and we could do the same magic for you!”
For the first time in 30 minutes, I was silent. The CIO looked at me and said “Steve, I really appreciate you taking the time to come all the way out here to meet with me and tell me about all the wonderful things you are doing for another railroad. To be candid, we don’t have a need for any of the technologies or solutions you just presented.” With that, he walked me to the door and said, “My assistant will show you out.” I was dumbfounded and in a state of shock. My delivery was perfect and my discussions focused on the railroad industry. What the heck?
Two days later, back in Colorado, I picked up the phone and called Mr. Railroad CIO who graciously agreed to talk with me. I simply wanted to know what happened. Mr. Railroad CIO shared with me that he purposely showed me the door to teach me a lesson that he hoped would benefit me throughout my future life in business. He explained that my focus should always be on first seeking to understand the needs, desires and priorities of the client. He said, “It doesn’t matter what you have done, how you have done it or what you have accomplished, if it isn’t relevant to the needs of the client sitting in front of you.” He followed this with, “Never sell before you understand.” That experience has stuck with me since, and I thank Mr. Railroad CIO for his willingness to teach me an important lesson.
Being strategic means being relevant; and being relevant can only happen if a customer’s needs, requirements, and business are clearly understood. Today, clients expect that their IT Solution Providers are investing in the process of learning their business, their culture, and their operations. Without this investment in knowledge, IT Solution Providers are throwing things against the proverbial wall and hoping something sticks. Too often, IT Solution Providers wait for their clients to request technology – hardware, software, solutions – and then they react and respond to those requests. This is a reactive model of service, and most IT Solution Providers fall into this bucket.
While being responsive is important, as I discussed in my last blog, being proactive can lead to true customer service and loyalty. Imagine a conversation with a client that goes something like this:
“Mrs. Client, based on your key initiatives for the coming year and your focus on getting your IT staff up to speed on technology X, I thought you would benefit from this written case study of an implementation of this solution we recently did for another client. Let me know if you would like to talk about how we could help you in the same way.”
This proactive type of value can create huge separation between you and your competition, and it helps the sales process advance much more smoothly. All of this sounds like common sense, doesn’t it? Then why in the world are we, as an industry, not doing it more consistently?
Perhaps because it takes time. Being Strategic requires an investment of time and effort to learn about your customer in many areas:

  • Current IT infrastructure, operations, and applications
  • Key initiatives and priorities
  • IT vision & strategy looking forward (future)
  • Procurement, budgeting, approval process
  • Current IT staff (expertise and experience)
  • Culture

If an IT Solution Provider can share ideas, solutions, and technologies based on the actual needs and operational realities of a client, those ideas and solutions have a much greater chance of adding value to the client. Don’t make the mistake I made by “showing up and throwing up” without first doing your homework and investing in learning what really matters to the client. Your investment will separate you from the competition and create an opportunity to add relevant value to your client.
Next week we will explore the importance of being Trusted by our clients and the process to create the opportunity for a trust-based relationship.
Until we meet again, I wish you the very best in your efforts to serve customers in the ways they wish to be served.

As Information Technology (IT) professionals, we develop tunnel vision from time to time. Disaster Recovery (DR) planning is an area that tends to be a focal point of the aforementioned tunnel vision. IT professionals show a propensity to zero in on technology DR planning, working diligently to ensure primary data center services are recoverable and functioning in a timely manner after a disaster.
In doing so, they often forget that technology DR planning is just a piece of the larger Business Continuity (BC) planning. BC planning is the process of preparing to mitigate the damage caused by any disruption of normal business operations and ensure the return to normal functionality in as expedient and efficient a manner as possible.
VMware Site Recovery Manager (SRM) is a tool that can greatly assist with both types of planning through the automation of technology within VMware virtualized infrastructures. There are three very important things to know about VMware SRM.

    1. First, it is a tool to assist with the disaster recovery process. It is not a complete DR plan or process in and of itself.
        • SRM can allow for the automated failover of virtual machines (VMs) from the protected site to the recovery site.
        • It provides the capability to automate the reconfiguration of many items for the protected VMs, for example, changing the IP configuration of the VMs to be compatible with the recovery site’s IP address schema.
        • It does not have the capability to change items external to the virtual infrastructure, such as Internet domain name services and public IP addresses. When a failover to the recovery site does occur, the public IP address for that site will differ from the one used at the protected site. The tools and process used to facilitate this change are just one example of items that must be included within a complete DR plan (a sketch of such a step follows this list) – though VMware SRM is a useful tool, it cannot be used as an all-in-one strategy for DR.


    2. The second thing to know about VMware SRM is that it does not require the use of storage vendor array-based replication (ABR).
        • Vendor ABR solutions can provide a smaller recovery point objective (RPO) – a fancy way of saying that less operational data is lost – than VMware vSphere Replication (VR). Some vendor ABRs can provide an almost real-time RPO through synchronous replication.
        • VMware VR, which is included with vSphere Standard edition and up, can provide an RPO in windows as small as 15 minutes. For many organizations, a 15-minute RPO is acceptable, especially when balanced against the additional cost of vendor ABR solutions.


    3. Finally, the third significant thing to recognize about VMware SRM is that it can be utilized as more than a DR tool.
        • SRM provides the capability of seamlessly testing failover of VMs from the protected site to the recovery site and back again in isolation, without affecting the production use of said VMs.
        • SRM can fully migrate production VM workloads between vCenter clusters and physically distinct data centers without disruption to services. This offers IT organizations great flexibility in providing continuous services through hardware and software upgrades for the virtual infrastructure, as well as moves between facilities.
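
As a concrete example of one of those “external” steps, here is a minimal sketch of repointing a public DNS record at the recovery site after a failover. The provider URL, token, and JSON body are all hypothetical placeholders – substitute your DNS host’s real API (or a documented manual runbook step) in an actual DR plan:

```python
# Repoint the public "www" A record at the recovery site's public IP.
# Everything about the provider API below is a hypothetical placeholder.
import json
import urllib.request

DNS_API = "https://dns-provider.example/api/v1/zones/example.com/records/www"
API_TOKEN = "REPLACE_ME"            # credential for the hypothetical DNS API
RECOVERY_SITE_IP = "203.0.113.50"   # public IP of the recovery site

payload = json.dumps({"type": "A", "content": RECOVERY_SITE_IP, "ttl": 300}).encode()
req = urllib.request.Request(
    DNS_API,
    data=payload,
    method="PUT",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    print("DNS update status:", resp.status)
```

Whether this step is scripted or performed by a person, the DR plan must name it explicitly, because SRM will not do it for you.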

VMware SRM can be a valuable tool used for the technology disaster recovery plan, which itself is only part of the organization-encompassing business continuity plan. Depending on the virtual environment of an organization, SRM can provide an excellent way to stay within budget without giving up what matters from a data recovery point of view. Though it comes with certain limitations and should not be expected to perform all functions of a good DR plan, it should certainly be considered as a piece of the puzzle as DR strategies are developed.
