In the last three years, 96% of global IT decision-makers have experienced at least one outage. The average downtime following a ransomware attack is three weeks. And according to ITIC’s 2021 Hourly Cost of Downtime survey, 91% of mid-sized and large enterprises say just one hour of server downtime would cost them $300,000 or more; half of those believe it would exceed $1 million.
Data is the lifeblood of any business. Without a reliable and robust disaster recovery plan in place, any unexpected disruption—whether from hardware failure, natural disaster, or a cyberattack—can result in data loss, prolonged downtime, crippling financial losses, and reputational damage.
Disaster Recovery as a Service (DRaaS) has emerged as one of the most effective and efficient approaches to disaster recovery in recent years. With the ability to back up all cloud data and applications in a managed data center, the pay-as-you-go cloud service model not only safeguards critical assets, but also ensures rapid restoration, minimizing the impact of disruptions on business operations. In essence, DRaaS simplifies disaster recovery, keeping your business resilient and operational in the face of adversity.
Disaster recovery is a critical component of your business’s IT strategy. With a well-managed DRaaS approach, you can solidify a resilient footing against disruptions, safeguard your critical assets, and ensure the continuity of your business operations in the face of what could otherwise be catastrophically damaging events.
At Zunesis, we can help you adopt a comprehensive DRaaS approach that protects your critical data and applications. With advanced expertise in HPE’s backup, recovery, and ransomware protection capabilities, we’ll partner with you to ensure your business stays resilient and operational, even in the face of unexpected events.
For more information, contact us here.
Downtime costs businesses an average of $84,650 per hour. A natural disaster or cyber-attack can result in weeks of downtime for a business that’s not prepared, delivering a massive financial blow. Even worse, according to the Federal Emergency Management Agency, 40% of small and mid-sized businesses never reopen after a natural disaster, and an additional 25% reopen but fail within a year. These statistics are staggering—and sadly, we’ve seen scenarios like these play out many times with our clients.
The threat of man-made and natural catastrophes is real—and in most cases, it’s something you can’t control. What you can control, however, are the safeguards you have in place to help your business recover when disaster strikes.
DRaaS is a pay-as-you-go cloud service model that delivers backup services in a managed data center to restore access to and functionality of IT infrastructure after a disaster. It gives an organization a total system backup for rapid restoration of data, servers, and applications in the event of system failure. By replicating and backing up all cloud data and applications, DRaaS protects data, limits downtime, and helps meet tight Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) when a disaster happens.
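To make the RPO idea concrete, here is a minimal Python sketch (purely illustrative, not part of any DRaaS product) showing how the replication interval bounds your worst-case data loss:

```python
from datetime import datetime, timedelta

def data_loss_window(last_replica: datetime, failure: datetime) -> timedelta:
    """Worst-case data loss (the achieved RPO) is the gap between the
    most recent good replica and the moment of failure."""
    return failure - last_replica

# With 15-minute replication, a failure at 10:07 after a 10:00 replica
# means at most 7 minutes of data is lost.
loss = data_loss_window(datetime(2023, 5, 1, 10, 0), datetime(2023, 5, 1, 10, 7))
```

Tightening the replication interval is therefore what lets a DRaaS provider commit to a tighter RPO.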
At Zunesis, we can help you achieve modern data protection with HPE GreenLake for data protection. Ask us how your company can start a free trial of HPE GreenLake backup and recovery.
For more information about HPE’s industry-leading backup, recovery, and ransomware protection capabilities, contact us today.
Keeping on topic with how our country is faring, I thought I would touch on one of the many elephants in the room that often goes unnoticed or is ‘conveniently’ forgotten about. That elephant is named Disaster Recovery, and an organization might be called a ‘Dumbo’ if it doesn’t have a Disaster Recovery Plan in place.
First, what constitutes a disaster in the IT world? A disaster in IT can fall into many different categories:
All of the above seem to be happening on a daily basis. The question is no longer if this is going to happen to an organization; it’s WHEN. There are plenty of natural disasters all year long, and they seem to continue to increase in severity.
Thieves are getting more and more creative and are constantly thinking of innovative ways to hack into even some of the most fortified infrastructures. Just ask Epsilon, Facebook, Sony, Yahoo, etc. Ransomware is nerve-racking to say the least. An organization could shell out hundreds of thousands, if not millions, of dollars on the hope of possibly getting data back. Talk about a gamble! You’d have better luck at the blackjack table. Power outages seem to be more frequent now than ever. Rolling blackouts seem to be the trend in many states, and it’s only mid-June!
And then there is the human race and our ability to create a cataclysmic mistake simply because someone was having a bad day. We’ve all been there, and we all know that sometimes the slightest little thing will set us off and constrain our ability to focus on the smallest of details. Even worse, there are many times when an organization knows it needs to do something as soon as possible, but instead tries to wish the problem away, hoping it will resolve itself. Disasters are like a cancer; they aren’t going away unless treated.
These stats from Markel Insurance Company are shocking and should open some eyes.
Now, I am not here to sell insurance to you, but to create more awareness out there that if an organization doesn’t have a plan in place, many will be filing unemployment WHEN that day happens.
Before that plan is created, it may be a good idea to have an overall Disaster Recovery Assessment, which looks at a company’s server, storage, and network infrastructure. Typically, a specialized engineer will work onsite, or remote into the specific infrastructure, and document the existing environment. From there, the engineer will provide recommendations based on best practices, along with a ballpark figure of what it will cost to ensure the Disaster Recovery Plan minimizes the pain of an actual disaster as much as possible.
There are some organizations out there whose pride is bigger than their brains. Don’t let ego get in the way and be a know-it-all. The reality is that no organization knows it all. Employees have been doing the work of two or three people since the early 2000s, and this Disaster Recovery Plan should not be added to their list of daily duties. Swallow the pride and bring in a team that specializes in Disaster Recovery. They know what to look for, ask the right questions, and uncover weaknesses or threats that have gone unnoticed; they may even reduce the risk of a disaster that was right around the corner.
I am not an alarmist, but a realist. It’s going to happen to every organization. How prepared are you if it happens tomorrow? Good luck sleeping with that floating around in your head tonight. Contact Zunesis for an assessment of your current environment and for recommendations on Disaster Recovery solutions.
When you want to take on new technology these days, the options seem endless. The as-a-service model is the leading trend in modernizing your IT environment. Why pay up front for an expensive piece of hardware when you can pay for what you use each month? CapEx is on its way out; OpEx is rapidly becoming the future of IT budgets. Who wants to drive with gas when electric is so much more efficient? This train of thought is driving (pun intended) how IT professionals are thinking about and responding to the services offered.
Disaster Recovery often gets put on the back burner. Storage and compute have always been the exciting leaders in the data center. But in a time when cyber threats and ransomware are on the rise, disaster recovery is taking center stage. To make everyone’s life easier, more efficient, and more profitable, Disaster Recovery as a Service (DRaaS) has moved to the forefront of OpEx budgets.
This cloud computing and backup service model uses cloud resources to protect applications and data from disruption caused by a disaster. A complete system backup allows an organization to maintain continuity in the event of failure. A DRaaS solution provides an easy way to move a production workload to the cloud, and once that instance is deployed, it can act as a sandbox for further experimentation. Necessity may be the mother of invention, but experimentation builds the momentum for innovation.
How fast can your organization recover from the moment of a disaster to the moment you return to regular operation? Businesses today have no tolerance for downtime. DRaaS provides a critical bridge, allowing companies to operate remotely while normal processes are restored.
While natural disasters are commonly associated with the need for DRaaS, five of the most common reasons an organization uses DRaaS are:
“How much can you save?” is the new mantra on the mind of every modernizing organization. Disaster recovery is often seen as a burdensome cost when it should be thought of as an investment. For any organization considering a transition to infrastructure as a service (IaaS), DRaaS can act as a stepping stone to full virtualization.
Carbonite Recover is a DRaaS offering that can help you achieve all these goals. By securely replicating critical systems from a primary environment to the cloud, it ensures an up-to-date secondary copy is ready for failover at any moment. Who doesn’t want to minimize downtime as well as cost? With DRaaS, you pay for what you use, when you use it, not for idle resources.
Carbonite Recover allows businesses to enjoy all the benefits of resilient IT without owning the hardware or being responsible for maintenance. Whether you’re modernizing or minimizing, both approaches recognize that less means more. Who doesn’t want the freedom of less responsibility?
Today, it is common to combine modern and legacy systems operating side-by-side. Not all DRaaS vendors support legacy systems, but Carbonite continues to be an industry leader in supporting many different legacy platforms. When protecting your environment with Carbonite, you can also count on them to support other platforms such as:
Carbonite’s DRaaS advantages are built in, with 100 percent cloud computing across multiple sites. Resources are replicated to many different sites to ensure continuous backup if one or more sites become unavailable. Depending on the customer’s requirements, protection can be as granular or as comprehensive as needed, reducing cost through flexible protection.
Not only does Carbonite support a range of legacy and cloud-based platforms, but it also offers more control than competitive ISP solutions. It also provides flexible failover options that don’t require spare machines or extra fees, along with 24/7 phone support. DRaaS is one of those things you can’t afford not to have.
See how one retail chain stays in control with Carbonite Recover in this case study.
Zunesis partnered with Carbonite many years ago, not just as a reseller but as a customer. We have relied on the many advantages of using their products. We can attest to their cutting-edge technology, quality customer service, and competitive pricing. For more information on all the Carbonite products, contact us today.
In the age of ransomware and digital transformation, your company’s data is more critical than ever to keeping your business running. While avoiding data loss is the main priority of backups, it can often be hard to balance that priority against the cost of those backups. On top of all that, data is harder to manage and control than ever due to multi-cloud infrastructure and an increasingly remote workforce.
So how do you deal with all of that and still have a trusted backup in case the worst should happen? You need a single, robust solution for data management that can protect your data through all phases of its lifecycle.
Veeam has been a leading player in the backup space for a while now. It has really stepped up when it comes to addressing new challenges in data protection. It has a host of different products, enterprise to consumer, that make it easy for businesses big and small to tackle the issue. They continue to innovate and keep your data secure while remaining flexible enough to fit into any environment. This year is no different with the release of Veeam version 11.
Veeam’s new Continuous Data Protection (CDP) integrates with VMware environments to eliminate downtime and minimize data loss with a host of new features. CDP eliminates the need for VM snapshots with I/O-level tracking and reduces the bandwidth needed for replication. It works with any OS or application, as long as it is running in a vSphere VM. CDP will also schedule your jobs for you: just define the required RPO, and CDP will take care of it. Depending on the amount of data, CDP can also offload data processing from your hosts to proxies, and it calculates the required bandwidth to eliminate guesswork.
The new hardened repository keeps your backups safe from malware and hackers with immutable backups. Single-use credentials are never stored in the configuration database, eliminating any possibility of those credentials being extracted from a compromised backup server.
The new archive tier reduces the cost of long-term archives. Veeam now integrates with Amazon S3 Glacier and Azure Blob archive storage, which are best suited for very long-term retention. These repositories can be made immutable and are policy-based, so no ongoing management is required.
Expanded instant recovery makes even more of your workloads instantly available. Instant recovery has been a Veeam feature for a while, but it has now been extended to SQL Server and Oracle databases. Regardless of size, databases are made available to production applications and clients in minutes. You can then finalize those recoveries manually, schedule them to switch as soon as synchronization catches up, or schedule the switch during maintenance hours.
Veeam has made many more improvements in version 11, enhancing many aspects of the product. GFS and archive backups have added functionality, PowerShell is now more powerful, backup speeds have increased, compliance with WORM backups has been added, and the GUI has seen some improvements. All of these features are included with normal licenses; you don’t need to pay extra for any of it.
I am barely even scratching the surface on what is in the new version. I am most excited about the steps Veeam is taking to make their product more resilient against viruses and ransomware. Ransomware attacks have ramped up over the last few years. Keeping good copies of data in case of such an attack has been a struggle. Companies like Veeam have been in an arms race against hackers. Things like immutable backups are a huge leg up in the fight. Maybe someday, we won’t have to worry about that stuff, but for now, at least we have Veeam.
Contact Zunesis to find out more about Veeam V11.
Every 11 seconds, a network is attacked by ransomware. Each successful attempt costs a company $80,000 on average, adding up to over $20 billion each year, a number that continues to grow. Cyber security is quickly becoming one of the most important investments for companies large and small.
These investments come in many forms: training, antivirus programs, spam filters, and backups, to name a few. Once your data is compromised, though, there is really only one thing you can do: restore from a backup.
So, how does Veeam backup help protect data against ransomware?
Immutable backups are copies of your data that cannot be changed. Veeam offers immutability in the capacity tier of their Scale-out Backup Repository (SOBR). It leverages a native function of object storage that prevents blocks of data from being changed for a set amount of time. Not even a malicious admin with full access to backups can change this data, let alone ransomware.
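The native object-storage function referenced above is S3 Object Lock. As an illustrative sketch (the bucket and key names are hypothetical, and the request is only built here, not sent), these are the parameters an S3 `put_object` upload would carry to make a backup object immutable for a set period:

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket: str, key: str, days: int) -> dict:
    """Build the extra parameters an S3 put_object call needs so an
    uploaded backup object cannot be changed or deleted for `days` days.
    In COMPLIANCE mode, not even an account admin can shorten the lock."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=days)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Hypothetical bucket/key; a real upload would pass these kwargs to
# boto3's s3_client.put_object() along with the data itself.
params = object_lock_params("backup-capacity-tier", "vm42/restorepoint.vbk", 30)
```

The bucket must be created with Object Lock enabled for these parameters to be honored.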
A related concept is air-gapping your backup repository. This basically means backups are unreachable or offline after the backup is taken. A common way of doing this is tape backups. Once the tape is written, it is physically removed from the network. It is stored in a secure location, inaccessible until the tape is moved back onto the network.
Another similar feature offered by Veeam is rotated media. It allows you to swap hard drives between backup chains so that one or more drives with backup data are offline, or air gapped, at all times. This protects that set of data from attacks.
Detecting ransomware in its initial stages can be difficult. Veeam ONE provides the ability to monitor your environment closely and be aware of any suspicious or abnormal activity. By analyzing CPU usage, datastore write rate, and network transmit rate, Veeam ONE can help identify higher than normal activity on a particular machine, trigger an alarm, and immediately notify you to inspect the machine.
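Veeam ONE’s internals aren’t shown here, but the underlying idea, flagging a metric that spikes far above its recent baseline, can be sketched in a few lines of Python (a simplified illustration, not Veeam code):

```python
from statistics import mean, stdev

def is_suspicious(history, sample, sigma=3.0):
    """Flag a metric sample (CPU %, datastore write rate, network
    transmit rate, etc.) that sits more than `sigma` standard deviations
    above its recent baseline -- the kind of spike that mass file
    encryption produces and that should trigger an alarm."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    return sample > baseline + sigma * max(spread, 1e-9)

cpu_history = [12, 15, 11, 14, 13, 12, 16, 14]   # normal CPU % samples
spike_alarm = is_suspicious(cpu_history, 95)      # sudden encryption-like load
quiet = is_suspicious(cpu_history, 15)            # within normal range
```

Real monitoring adds rolling windows and multiple correlated metrics, but the alarm logic is the same in spirit.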
SureBackup is a feature of Veeam that allows you to create a sandbox to test your backups before restoring them to production. It can run virus and malware scans on backup sets, automatically or manually. It ensures your data is not infected without the need to restore the data somewhere first.
A related feature is Secure Restore, which scans your data as it is being restored. This gives you access to the latest virus definitions which helps safeguard against viruses that were previously unknown at the time of the backup.
Unsure of a workload, or suspect it may be infected? DataLabs gives you the ability to restore the data to a fully secured and isolated environment to test. A fully isolated sandbox lets you run any tests you want without impacting production systems, so you can make sure your workloads are uninfected before you restore them.
Veeam is part of a group of leading hardware and software companies, like HPE, Cisco, and AWS, that work together to make sure their products integrate using the highest security standards possible. They bring together the most powerful recovery solutions to combat ransomware.
Veeam backup and recovery is a powerful tool in the fight against ransomware, but its effectiveness depends entirely on how it is implemented and used. You should always secure your backup server, follow the 3-2-1 rule, implement Veeam’s features for ransomware detection, protect your network, and test your backups. A good backup strategy is just one more piece of the puzzle in the fight against ransomware.
Contact Zunesis to find out more about Veeam backup and recovery solutions for your organization.
There are many facets to a thoughtful plan for maintaining highly available access to your business-critical Data and Applications. The consideration starts with the location of your hardware infrastructure components (Compute, Networking, Storage). Does the facility provide security, cooling, reliable and redundant power, etc.? Are your hosts, storage, and network equipment designed with redundancy (e.g., Power Supplies, Fans, Drives)? Does your design include Clustering, Replication, perhaps a Disaster Recovery site? All of these are part of a complete plan.
But, even the most highly available hardware infrastructure is not much use without the Data and Applications it is configured to support. For protection of data and applications, we must have a Backup/Recovery process in place. Often, with Backup/Recovery implementations, the biggest effort is with the initial setup. This is where the software is installed, backup targets are configured, and backup jobs are defined. After that, the jobs get monitored periodically. If the job status is green, then nothing more is done until a file or Virtual Machine (VM) needs to be recovered.
While taking time to plan the jobs and maintain consistent monitoring of them is critical, testing the Recovery of the Data and Applications being protected is equally important. All of us would likely agree we need to validate our Backup data. However, this is a step that is often pushed to the side because of competing priorities in every IT environment. For many IT environments, Backup/Recovery becomes a “set it and forget it” activity. The focus is mainly on the Backup process.
So, perhaps the answer to ensuring we validate the recoverability of what we are backing up is to automate the validation process. At Zunesis, we partner with Veeam to help our clients protect their Data and Applications with Veeam Backup and Replication (Veeam B&R). If you aren’t familiar with Veeam, let me provide a brief summary.
Veeam B&R is a Backup/Recovery application for protecting any workload, including virtual machines, physical servers, Oracle, Microsoft SQL, Exchange, Active Directory, Microsoft SharePoint, NAS, and Cloud. These don’t represent everything that Veeam B&R can protect, but this list should make it clear that Veeam will likely be able to protect any workload in your environment. Furthermore, Veeam has built-in Replication, WAN Acceleration, Integration with many storage arrays, Encryption, Deduplication, Compression, and more.
But the one feature I want to highlight here is Veeam SureBackup. Perhaps you use Veeam and have seen the SureBackup option in the management console but never really explored its capabilities. To summarize, SureBackup is the Veeam technology that lets you test VM backups and validate that you can recover data from them. With SureBackup, you can verify any restore point of any VM protected by Veeam B&R. Using SureBackup, Veeam B&R can boot the VM from the backup in an isolated environment, scan the VM for malware, run tests against it, power it off, and create a report on the recovery verification results. The report can then be automatically emailed to you for review.
As referenced below, SureBackup is a feature you would see whenever you are viewing the Veeam B&R Management Console. And like most of the Veeam features, you are guided through its setup using a step-by-step process in the Management Console. The screenshot shown below lists the major steps (in order) for setting up the SureBackup environment.
While it is beyond the scope of this post to walk you through the entire setup, I would like to provide you a summary of the setup using the steps outlined in the screenshot above. Through this Summary, I hope to convey the power of the Veeam SureBackup feature.
It is important to remember that the SureBackup feature utilizes VMs that are protected by scheduled Veeam Backup Jobs.
Once you have the Backup Jobs defined, you can setup the SureBackup environment to validate that what you’re backing up can be restored when the need arises. So, let’s take a look at the major steps required to implement SureBackup.
The first step in building a SureBackup environment is to Create a Virtual Lab. The virtual lab is an isolated virtual environment in which the backed up Virtual Machines are started and tested. You can create multiple Virtual Labs depending on your needs. During the creation of the Virtual Lab, Veeam B&R will deploy a Linux Appliance that will fence off your Production environment from the Virtual Machines being tested.
The Appliance acts as a gateway and provides DHCP and routing for the isolated environment, while facilitating access from the Production environment if needed. To accomplish this, the Appliance has network access to both the Production environment and the Virtual Lab. With the Appliance in place, VMs can be restored to the Virtual Lab using the same IP addressing they have in the Production environment from which they were backed up. The Appliance keeps any conflicts from arising between the two parallel environments.
With the Appliance in place, it’s time to create the Application Groups. An Application Group includes the VMs you want to validate along with any VMs they may depend upon. For instance, if you want to test a SQL database server, you will probably want a Domain Controller and DNS server available, and perhaps the application server. So, the Application Group is where you define a working environment for the workloads you want to validate.
With the Virtual Lab(s) and Application Group(s) defined, it’s time to create the actual SureBackup Job that will build the environment on demand or on a schedule. In this step you specify the Virtual Lab you’ll be using and the Application Group you’ll be including in that Virtual Lab. Then, you can select from the Backup Jobs you already have running to specify the VMs you want to validate.
As part of the Job creation, you configure what you want to test and validate for each VM. Examples of validation criteria include testing the disk content for corruption, scanning VMs for malware, and performing PING tests. During setup you can select predefined test scripts or include custom scripts of your own. Once all the components have been defined, you can schedule when you want the Jobs to run (daily, weekly, monthly) and decide to whom the results should be sent.
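To summarize the flow SureBackup automates, here is a simplified Python sketch (purely illustrative; the check functions are stand-ins, not Veeam’s API): boot a VM from backup, run each configured check, and collect a pass/fail report:

```python
def verify_restore_point(vm_name, checks):
    """Run a list of (name, test_fn) checks against a VM booted from a
    backup and collect a pass/fail report -- mirroring the SureBackup
    flow: boot in an isolated lab, test, power off, email the report."""
    results = {}
    for name, test in checks:
        try:
            results[name] = bool(test(vm_name))
        except Exception:
            results[name] = False   # a crashed check counts as a failure
    # a real job would now power the VM off and email this report
    return results

# Hypothetical VM name and stand-in checks for illustration only.
report = verify_restore_point("sql-01", [
    ("heartbeat", lambda vm: True),               # stand-in for a PING test
    ("disk integrity", lambda vm: len(vm) > 0),   # stand-in for a corruption scan
])
```

The value of automating this loop is that every restore point gets the same battery of checks, on a schedule, with no one having to remember to run them.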
So, as you can see, the SureBackup environment will take a little time and planning to build and test. The benefits are well worth the effort.
It provides an automated method of validating backups, and its design allows the Virtual Lab to be created on demand. That gives you an environment where you can test server and software updates, perform security testing, and run DevOps and analytics workloads, all without impacting your Production environment. Veeam calls this capability the On-Demand Sandbox.
If you already use Veeam B&R, but haven’t tried the SureBackup option yet, I hope this post has encouraged you to give it a try. If you do not currently use Veeam, I hope your interest is piqued and you want to learn more. In either case, Zunesis has Solution Architects who can help you. We have Veeam B&R deployed in our lab so you can explore the SureBackup functionality for yourself. You can get a better understanding of this important piece of a thoughtful plan to maintain highly available access to your business-critical Data and Applications.
As we approach day [xyz] of the plague, I was ready to write another blog post about COVID-19 and technology. It seems that all we can think about lately is the virus. Working from home with three kids under 10 years old certainly has been “fun” for me. I’ll definitely be glad once this thing is gone.
Instead, I’d like to take some time to talk about ransomware, another currently rampant plague, this one of the digital variety. Among malware, ransomware is some of the absolute worst of the worst. It certainly has its own place in H-E double hockey sticks.
At a time when people and businesses are already suffering, we are seeing an uptick in ransomware attacks. Your files are encrypted, and cyber criminals demand a ransom in order to decrypt them. Often, the attackers use military-grade encryption, so the only way to decrypt the files is to pay the ransom.
Since only the criminals have the required decryption keys, it would be nearly impossible to decrypt even with your handy dandy cereal box decoder ring. Unfortunately, paying the ransom is a risky proposition. There is no guarantee that your files will be decrypted. This also validates the cyber criminal business model and encourages bad actors.
Ransomware spreads like fire, and burns the building to the ground if you don’t prepare.
First of all, you REALLY should have good backups. This doesn’t prevent a ransomware attack, but it certainly prevents you from needing to either a) open your wallet or b) lose important data.
You might be surprised how many of us don’t follow rule #1 for data. Backups should be available locally as well as off-site or in the cloud. You should also make sure that you can restore from multiple points in time, in case your more recent backups contain ransomware. This isn’t just best practice for ransomware; it is good practice in general.
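As an illustration of keeping multiple points in time, here is a simplified Python sketch of a retention policy (illustrative only, not any vendor’s implementation) that keeps recent daily backups plus older weekly ones, so an infection in recent backups still leaves clean restore points to fall back on:

```python
from datetime import date, timedelta

def restore_points_to_keep(points, daily=7, weekly=4):
    """Given backup dates, keep the last `daily` days plus the newest
    point from each of the most recent `weekly` calendar weeks."""
    points = sorted(points, reverse=True)       # newest first
    keep = set(points[:daily])                  # recent dailies
    weeks_seen = set()
    for p in points:
        wk = p.isocalendar()[:2]                # (ISO year, week number)
        if wk not in weeks_seen and len(weeks_seen) < weekly:
            weeks_seen.add(wk)
            keep.add(p)                         # newest point of that week
    return sorted(keep, reverse=True)

# 30 consecutive daily backups ending June 1
history = [date(2023, 6, 1) - timedelta(days=i) for i in range(30)]
kept = restore_points_to_keep(history)
```

Real backup products implement far richer grandfather-father-son schemes, but the principle is the same: never let all your restore points fall inside the window an attacker could have poisoned.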
Whether it comes in the form of ransomware, hard drive failure, data corruption, or space aliens shooting lasers at your PC, you really should have a plan for your data. How much is your data worth to you? For the ransomware event, skip the heartburn and restore from backup prior to an attack.
Prevent ransomware with good personal cyber hygiene
Be proactive with cyber security. Here are some suggestions:
Hopefully this will always be theoretical, and you never get hit. First of all, you definitely want to isolate the machine. This stuff will scan your ARP tables, your registry, and a variety of other sources to look for other hosts to infect. I’d say immediately power off, enter the nuclear codes, and kill it with fire. In other words, wipe/erase the machine. You can then move forward with rebuilding the OS and restoring your data once you’ve got a blank canvas. Just because your security scan came up clean does not 100% guarantee a malware free result.
Next, if there are other machines on the network, quarantine and examine them. Ransomware will proactively work to infect everything else it can on the network. If other machines are impacted, they should also be nuked and rebuilt. This includes your business critical servers. Actually, this is especially critical for business critical systems. These systems house critical data and are often a central point of access (and a point of infection) for many users. YES, THIS IS PAINFUL. However, if you have good backups to restore from, it isn’t nearly as big of a deal.
Much like the human pandemic that we are all too familiar with, hopefully you are “distancing” yourself from the digital pandemic. The best way to beat a ransomware attack is prevention, not reaction after the fact when it’s too late. If you need help preparing, or even just a second set of eyes to review your existing strategy, please contact us for an assessment. We are here to help.
Office 365 has become one of the most popular cloud-based productivity platforms. According to a recent study performed by Barracuda, “Market Analysis: Closing Backup Recovery Gaps”, more than 60% of IT professionals are using it to drive business success in some fashion. Email is the most popular (78%), followed by OneDrive (60%), SharePoint (50%), Teams (36%), and OneNote (35%).
Microsoft has done a good job of creating “Best Practices” for Office 365 tenant security. On January 6, 2020, it released the “Top 10 ways to secure Office 365 and Microsoft 365 Business Plans,” which aims to help organizations achieve the goals described in the Harvard Kennedy School Cybersecurity Campaign Handbook.
One glaring omission, though not intentional according to Microsoft, is backup and retention of Microsoft 365 data. Microsoft does not hide the fact that it does not back up or provide long-term retention of Microsoft 365 data.
That’s right, Microsoft does not provide backup or long-term retention of Microsoft 365 data.
Let that sink in.
Microsoft does not provide backup or long-term retention of Microsoft 365 data.
An estimated 40%, that’s right, 40%, of Microsoft 365 organizations aren’t using any third-party backup tools to protect their mission-critical data, mostly due to a major misconception that Microsoft is backing up their data for them.
In other words, while Microsoft provides a resilient SaaS infrastructure to ensure availability, it does not protect data for historical restoration for long. Its SLAs don’t protect against user error, malicious intent, or other data-destroying activity. In fact, deleted emails are not backed up in the traditional sense. They are kept in the Recycle Bin for a maximum of 93 days before they’re deleted forever. If a user deletes an email and the retention period is reached, that email is gone forever. If a user deletes their whole mailbox and the admin doesn’t notice before the retention period is reached, the whole mailbox is gone.
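The retention math is simple enough to sketch. This illustrative Python snippet (not a Microsoft API, just the arithmetic of the window described above) checks whether a deleted item could still be recovered:

```python
from datetime import date, timedelta

RETENTION_DAYS = 93   # the Recycle Bin window described above

def recoverable(deleted_on: date, today: date) -> bool:
    """A deleted item is only recoverable while it is still inside the
    retention window; once the window passes, it is gone for good."""
    return today - deleted_on <= timedelta(days=RETENTION_DAYS)

recoverable(date(2023, 1, 1), date(2023, 3, 1))   # still inside the window
recoverable(date(2023, 1, 1), date(2023, 6, 1))   # window long since passed
```

A third-party backup removes this hard deadline entirely, since restore points live outside Microsoft’s retention clock.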
On SharePoint and OneDrive, deleted information is retained by Microsoft for a maximum of 14 days, and individuals must open a support ticket to retrieve it. Microsoft cannot retrieve single items or files from SharePoint or OneDrive; it must restore an entire instance. It’s unlikely that such short retention policies will meet most compliance requirements.
Many assume that Microsoft will support their backup requirements for Office 365 data. This could be a costly mistake. If they suffer a serious incident, they could find that crucial data has been deleted permanently. There are plenty of advanced, cost-effective third-party backup and recovery solutions for Office 365. IT Managers should revisit their backup strategies to ensure there are no gaps in coverage, especially in cloud-based applications, such as Office 365.
It’s 2020, the holidays are over, and you’re back to managing your organization’s IT needs in support of its core initiatives. So, what’s on your mind? For many of our Clients, this can be summed up by three questions:
Ransomware is a reality for individuals and businesses alike; no person or entity is immune. To someone responsible for protecting an entire organization from a Ransomware attack, the specter is ever-present, one that requires 24/7 vigilance. These same individuals are keenly aware that, despite all their efforts at prevention, they may still be called upon to recover from an attack. We hear about this topic so much from our Clients that there are two blog posts on the Zunesis website focusing on it exclusively. I would encourage you to read both.
Mitigating the risks associated with Ransomware attacks requires a diligent adherence to a set of practices that include (but are not limited to):
If you are compromised, rather than paying a ransom, you’ll want to give your organization its best chance of recovering your data. To accomplish this, you’ll need to spend time reviewing your backup/recovery and disaster recovery plans.
When reviewing your plans, look for how they address the following:
While not exhaustive, the points outlined above emphasize the multi-faceted approach an Organization needs to take to give itself the best chance of avoiding the consequences of a Ransomware attack. As I stated earlier in this post, Ransomware is top of mind for all our Clients, and we will likely spend a lot of time working with them on it in 2020.
The challenge of not having enough resources and time has been a persistent issue in IT. I’ve been working in the industry for over 35 years, and it seems there has never been enough money, time, or people to execute the strategies developed to evolve and maintain the IT needs of an organization. In 2020, that is certainly not going to change.
The fact is, IT will always compete for the resources of the Organization because, for most organizations, their Mission Statement has nothing to do with building a world-class IT infrastructure. However, organizations across industries are more reliant than ever before on technology to carry out their primary Mission. For this reason, there will be an increasing array of projects that ultimately will need to be carried out by IT; the challenge of efficient resource utilization is not going to abate any time soon.
In the next decade, we will no doubt continue to see the evolution of how and where IT resources are utilized. After all, Digital Transformation is a journey, not a destination. More organizations are moving toward becoming Data-Driven, leveraging Artificial Intelligence and Data Analytics to glean customer insights and make better decisions.
With this move, we will see the proliferation of Edge Computing devices and the leveraging of IoT and Machine Learning. These technologies will push us to adopt different strategies for on-premises and Cloud-based Compute, Network, and Storage resources. For some IT organizations this will be a continuation of what they’ve already begun; for others it may mean a complete revamp of their existing infrastructure.
In the midst of protecting your organization from bad actors, executing on new projects, and maintaining the day-to-day tasks that are part of every IT organization, you and your team need to keep up with a constantly evolving industry that presents you with a myriad of options for continuing your Digital Transformation journey. You can’t ignore the advances in technology, nor the relevance they might have for your organization, but finding the time to understand them and assess their value won’t be easy.
Of course, no single response can address all of the topics mentioned above. However, Zunesis has been partnering with our Clients to navigate difficult problems since 2004. As technologies have evolved, so have our abilities to address the needs of our Clients and support their IT infrastructure, including the issues summarized here.
Whether you just want to sit down and discuss what’s on your mind, or you have already identified an area we can jump in and help, we are ready to engage. Just to give you an idea of what we have to offer, I’ve included a summary of some of the practices we have developed over 15 years to help our Clients achieve their goals.
NOTE: For any service we provide (one-time or ongoing), there is a standard process and set of deliverables we use as a starting template. From there, we will work with you to customize the service based on your specific needs. If there is one thing we know for certain, you have unique circumstances. We want to make sure our services conform to your specific needs.
IT Infrastructure Assessment – The objective for this assessment is to provide an analysis of where your infrastructure is today, where you want to see it in the future, and what will be required to bridge the gap.
Typical Tasks and Deliverables include:
BC/DR Assessment – The objective of this assessment is to provide an analysis of your current Backup/Recovery and Disaster Recovery architecture and processes. Because Ransomware is such a threat, we conduct this assessment with a sub-focus on recovery from Ransomware attacks.
Typical Tasks and Deliverables here include:
Recurring Data Center Advisory Service (RDCAS) – The objective of this service is to provide ongoing management of our Clients’ HPE environments. We monitor their device firmware and configurations, helping them maintain best practices per HPE documentation. This service is provided over a 12-month period.
Typical Tasks and Deliverables include:
Again, these are just examples of the ways we have helped our Clients address their challenges over the last 15 years. We have a team of technology professionals who are ready to assist you with all your infrastructure needs.
Have a great 2020. We look forward to hearing from you.