History of Windows Updates

 

 

Microsoft Windows has been a staple in the PC industry for over 30 years. Who can forget the oldies but goodies (not including server versions):

 

• Windows 1.0 – 2.0 (1985-1992)
• Windows 3.0 – 3.1 (1990-1994)
• Windows 95 (1995)
• Windows 98 (1998)
• Windows ME (2000)
• Windows NT 3.1 – 4.0 (1993-1996)
• Windows 2000 (2000)
• Windows XP (2001)
• Windows CE (2006)
• Windows 7 (2009)
• Windows Phone (2010)
• Windows 8 (2012)
• Windows 10 (2015)

 

 

According to NetMarketShare, Microsoft dominates the operating systems running on the more than 2 billion PCs in the world:

 

• Windows 10 – 43.86%
• Windows 7 – 36.47%
• Windows 8.1 – 4.18%
• Windows XP – 2.37%
• Windows 8 – 0.79%
• Windows Vista – 0.17%

 

 

Windows Update

 

 

Microsoft introduced Windows Update with Windows 98. It would check for patches to Windows and its components, as well as other Microsoft products such as Office, Visual Studio and SQL Server.

 

 

Windows Update had two problems.

1. Less experienced users did not know about it, because it had to be installed separately.

2. Corporate users not only had to update every machine in the company, they also often had to uninstall patches that broke existing functionality.

 

 

Patch Tuesday

 

 

Microsoft introduced Patch Tuesday in October 2003 to reduce the cost of distributing patches. Tuesday was chosen because it left the most time before the weekend to correct issues that arose with the patches, while keeping Monday free to take care of any unanticipated issues from the preceding weekend.

 

 

At Ignite 2015, Microsoft announced a change to the way security patches were distributed. Home PCs, tablets, and phones would get security releases as soon as they were ready, while enterprise customers stayed on the monthly Patch Tuesday cycle, retooled as Windows Update for Business.

 

 

Modern Lifecycle Policy

 

 

Windows 10 saw another change to update distribution. Microsoft now releases a new version of Windows 10 twice a year. A “Modern Lifecycle Policy” was created stating that Home and Pro editions of Windows 10 would receive security and feature updates for up to 18 months after release, and Enterprise editions for 24 months.

 

 

According to Microsoft, “a device needs to install the latest version (feature update) before current version reaches end of service to help keep your device secure and have it remain supported by Microsoft”.

 

 

Through it all, one constant remained: the potential for an update to cause unintended results, even breaking the machines it was intended to fix.

 

 

Last year alone, Windows 10 had at least two serious issues that emerged after the final builds were released. Microsoft had to delay the April 2018 Update because of unexpected “Blue Screen of Death” issues, and the October 2018 Update was pulled days after users discovered the upgrade deleted files.

 

 

On April 4, 2019, Microsoft announced a new policy to give users greater control over installing updates in Windows 10.

 

 

Improving the Windows 10 Update Experience

 

 

“We will provide notification that an update is available and recommended based on our data, but it will be largely up to the user to initiate when the update occurs.”

 

 

When Windows 10 devices are at, or will soon reach, end of service, Windows update will continue to automatically initiate a feature update. This keeps machines supported and receiving monthly updates which are critical to device security and ecosystem health.

 

 

New features will empower users with control and transparency around when updates are installed. In fact, all customers will now have the ability to explicitly choose if they want to update their device when they “check for updates” or to pause updates for up to 35 days.

 

 

Some of the features Microsoft is using to provide this control are:

 

 

• Download and install now option – Gives users the ability to enjoy feature updates as soon as Microsoft makes them available.
• Extended ability to pause updates – Allows a user to pause both feature and monthly updates for up to 35 days (seven days at a time, up to five times). Once the limit is reached, the device must be updated before it can be paused again (a small sketch of this bookkeeping follows the list).
• Intelligent active hours – Avoids disruptive update restarts. To further enhance active hours, users will now have the option to let Windows Update intelligently adjust active hours based on their device-specific usage patterns.
• Improved update orchestration – Improves system responsiveness by intelligently coordinating Windows updates and Microsoft Store updates so they occur when users are away from their devices, minimizing disruptions.
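
To make the pause bookkeeping concrete, here is a minimal sketch (in Python, purely for illustration) of the seven-days-at-a-time, five-times, 35-day-maximum logic described above. It models the policy only; it is not how Windows Update actually stores or enforces this state.

from datetime import date, timedelta

# Toy model of the pause rules described above: updates can be paused in
# 7-day increments, at most 5 times, for a 35-day ceiling. Illustrative only.
PAUSE_INCREMENT_DAYS = 7
MAX_PAUSE_COUNT = 5          # 5 x 7 days = 35-day maximum

class UpdatePausePolicy:
    def __init__(self):
        self.pause_count = 0
        self.paused_until = None   # date when updates resume

    def pause(self, today: date) -> date:
        """Extend the pause by one 7-day increment, if still allowed."""
        if self.pause_count >= MAX_PAUSE_COUNT:
            raise RuntimeError(
                "35-day limit reached: install updates before pausing again")
        start = self.paused_until or today
        self.paused_until = start + timedelta(days=PAUSE_INCREMENT_DAYS)
        self.pause_count += 1
        return self.paused_until

policy = UpdatePausePolicy()
for _ in range(5):
    print("Paused until", policy.pause(date(2019, 5, 1)))
# A sixth call would raise: the device must update before pausing again.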

 

 

Microsoft is also sharpening its focus on quality by expanding the Release Preview phase. This allows for more feedback and insight on new capabilities and expands interaction with ecosystem partners, including OEMs and ISVs.

 

 

Microsoft thanked its many millions of users for providing feedback, which allowed for early detection of low-volume, high-severity issues. A new public dashboard was created for increased transparency; it provides clear and regular communication with customers on update status and known issues.

 

 

Commercial customers will see the updates in late May, beginning the servicing period for version 1903 of Windows 10. If you are part of the Windows Insider Program, you probably already have the release.

 

 

For more information on what is included in the May 2019 Update, one of the better guides can be found here.

Increased Vulnerability

 

 

Identifying what connects to the network is the first step to securing your enterprise.  Control through the automated application of wired and wireless policy enforcement ensures that only authorized and authenticated users and devices are allowed to connect to your network.  At the same time, real-time attack response and threat protection is required to secure and meet internal and external audit and compliance requirements.

 

 

 

Laptops, smartphones, tablets and Internet of Things (IoT) devices are pouring into the workplace. The average employee now uses three devices. The addition of IoT increases the vulnerabilities inside the business, adding to the operational burden.

 

Wired and Wireless Devices

 

 

The use of IoT devices on wired and wireless networks is shifting IT’s focus. Many organizations secure their wireless networks and devices, but some have neglected the wired ports in conference rooms, behind IP phones and in printer areas.

 

 

Wired devices – like sensors, security cameras and medical devices – force IT to think about securing the millions of wired ports that could be wide open to security threats. Because these devices may lack built-in security and require access from external administrative resources, apps or service providers, wired access now poses new risks.

 

 

As IT valiantly fights the battle to maintain control, it needs the right set of tools: tools that can quickly program the underlying infrastructure and control network access for any IoT and mobile device – known and unknown.

 

 

Today’s network access security solutions must deliver profiling, policy enforcement, guest access, BYOD onboarding and more. They should offer IT-offload, enhanced threat protection and an improved user experience.

 

 

Mobility and IoT are Changing How We Think About Access Control

 

 

The boundaries of IT domains now extend beyond the four walls of business and the goal for organizations is to provide anytime, anywhere connectivity without sacrificing security.

 

 

How does IT maintain visibility and control without impacting the business and user experience?  It starts with a 3-step plan.

 

 

  1. Identify – what devices are being used, how many there are, where they connect from, and which operating systems they run. This provides the foundation of visibility: continuous insight into the enterprise-wide device landscape, potential device compromise, and which elements come and go over time.
  2. Enforce – accurate policies that provide proper access regardless of user, device type or location; this delivers a predictable user experience. Organizations must adapt to today’s evolving devices and their uses, whether the device is a smartphone or a surveillance camera.
  3. Protect – resources via dynamic policy controls and real-time threat remediation that extends to third-party systems. This is the last piece of the puzzle. Being prepared for unusual network behavior at 3 AM requires a unified approach that can block traffic and change the status of a device’s connection. (A simplified sketch of this identify/enforce/protect flow follows the list.)
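
To make the three steps concrete, below is a small, vendor-neutral sketch in Python of how an identify/enforce/protect loop could be structured. The device attributes, policy rules and VLAN names are hypothetical examples for illustration; this is not the ClearPass API.

from dataclasses import dataclass

# Vendor-neutral toy model of the Identify / Enforce / Protect loop.
# Device fields, policy rules and actions are hypothetical examples.

@dataclass
class Device:
    mac: str
    category: str        # e.g. "laptop", "ip-camera", "printer"
    location: str        # e.g. "conference-room", "lobby"
    authenticated: bool

# 1. Identify: profile the device (here, a lookup a profiler would populate).
def identify(mac: str, inventory: dict) -> Device:
    return inventory.get(mac, Device(mac, "unknown", "unknown", False))

# 2. Enforce: map device attributes to a network role/VLAN.
def enforce(device: Device) -> str:
    if not device.authenticated:
        return "guest-vlan"
    if device.category == "ip-camera":
        return "iot-vlan"            # cameras never touch user segments
    return "corp-vlan"

# 3. Protect: react to a threat event by changing the device's access.
def protect(device: Device, threat_detected: bool) -> str:
    if threat_detected:
        return "quarantine-vlan"     # block traffic until remediated
    return enforce(device)

inventory = {"aa:bb:cc:dd:ee:ff": Device("aa:bb:cc:dd:ee:ff", "ip-camera",
                                         "lobby", True)}
cam = identify("aa:bb:cc:dd:ee:ff", inventory)
print(protect(cam, threat_detected=False))   # -> iot-vlan
print(protect(cam, threat_detected=True))    # -> quarantine-vlan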

 

 

 

Organizations must plan for existing and unforeseen challenges.  With their existing operational burden, it’s not realistic to rely on IT and help desk staff to manually intervene whenever a user decides to work remotely or buy a new smartphone.  Network access control is no longer just for performing assessments on known devices before access.

 

 

 

Aruba ClearPass

 

 

The stakes are high. It’s surprising that more companies have not embraced secure NAC to prevent malicious insiders from causing damage to the enterprise. The use cases are many – controlling device connectivity, simplifying BYOD, securing guest access – and they all lead to the same answer: Aruba ClearPass.

 

 

Over 7,000 customers in 100 countries have secured their network and their business with Aruba ClearPass.  They have achieved better visibility, control and response.  Shouldn’t you? Contact Zunesis to find out how you can secure your network.

 

 

 

 

Improving High Availability

 

 

Regardless of the organization size, every one of our clients is continually assessing ways to make their IT environment more highly available. Depending on budgets, the level of availability considered can vary widely. But, whatever the budget, improving availability is an important endeavor for all organizations and the right approach is usually multi-faceted.

 

 

To be clear, I am not talking about backup/recovery or keeping data offsite in case of disaster. Rather, I’m focusing here on ways to keep systems up even when a part of the infrastructure fails; maintaining continuous operation.

 

 

This week, I want to start by listing a few of the more common solutions for achieving higher availability, but I will focus on one that leverages an industry standard. HPE mid-range storage arrays provide an affordable and easy way to deploy solutions for high availability.

 

 

Base Level of Availability

 

 

The base level of availability for all our clients begins with solutions that provide no single point of failure in the primary data center. In many cases, this is as simple as ensuring all storage is protected by some level of RAID and that there are multiple paths to the network and to storage. This level of availability also means that all hardware components have redundant fans and power supplies and that they are connected to redundant power distribution units in the rack.

 

With so many organizations using colocation services today, power redundancy can affordably be expanded to include multiple power grids and generators. So, you can see that even at the base level, providing high availability has multiple facets.

 

 

Host/OS Clustering

 

 

Usually, the next level of availability is some type of host/OS clustering (Microsoft Clusters, ESXi Clusters, etc.). Again, because the  use of colocation services is becoming so prevalent, stretching these clusters across geographic distances is often an affordable consideration. And, since we are talking here about maintaining continuous availability, the latency between sites should be very low to facilitate a stretch cluster.

 

 

These types of clusters are often active/active and will serve as failover sites for one another. It is this kind of infrastructure that supports an availability solution from HPE called Peer Persistence.

 

 

HPE Peer Persistence

 

 

An HPE Peer Persistence solution allows companies to federate storage systems across geographically separated data centers at metropolitan distances. This inter-site federation of storage helps customers use their data centers more effectively by allowing them to support active workloads at both sites and to move data and applications from one site to another while maintaining application availability, even if one site goes offline completely.

 

 

In fact, Peer Persistence allows for planned switchover events where the primary storage is taken offline for maintenance or where the workloads are simply moving permanently to the alternate site. In any event, the failover and failback of the storage is completely transparent to the hosts and the applications running on them.

 

 

This capability has been available on HPE 3PAR StoreServ arrays for over five years. And, now, with the latest release of the Nimble OS (5.1), Peer Persistence is supported on the Nimble Platform.

 

 

ALUA Standard

 

 

The basis for Peer Persistence is the ALUA standard. ALUA (Asymmetric Logical Unit Access) allows paths to a SCSI device to be marked as having different characteristics. With ALUA, the same LUN can be exported from two arrays simultaneously, but only the paths to the array accepting writes to the volume are marked as active.

 

 

The paths to the secondary side volume (the other array) will be marked as standby. This prevents the host from performing any I/O using those paths. In the event of a non-disruptive array volume migration scenario, the standby paths are marked as active. The host traffic to the primary storage array is redirected to the secondary storage array without impact to the hosts.
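
A small model makes the active/standby behavior easier to see. The Python sketch below mimics how a multipath host treats ALUA path states across a switchover; the class and array names are illustrative assumptions, not any vendor’s actual interface.

# Toy model of ALUA path states during a Peer Persistence-style switchover.
# The same volume is visible through both arrays, but only paths to the
# array currently accepting writes are "active"; the peer's are "standby".

class MultipathVolume:
    def __init__(self, primary_array: str, secondary_array: str):
        self.paths = {primary_array: "active", secondary_array: "standby"}

    def io_target(self) -> str:
        # The host only issues I/O on paths reported as active.
        return next(a for a, state in self.paths.items() if state == "active")

    def switchover(self):
        # On failover (or a planned switchover) the path states are flipped;
        # the host simply follows the new active paths, so applications
        # keep running without being reconfigured.
        for array, state in self.paths.items():
            self.paths[array] = "standby" if state == "active" else "active"

vol = MultipathVolume("array-site-A", "array-site-B")
print(vol.io_target())   # array-site-A
vol.switchover()         # e.g. site A is taken down for maintenance
print(vol.io_target())   # array-site-B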

 

 

Components Needed

 

 

Whether you use HPE 3PAR StoreServ or Nimble, Peer Persistence requires certain components. First, you must have two arrays that support synchronous replication – in this case, either HPE 3PAR StoreServ or Nimble arrays (a 3PAR pairs with another 3PAR, and a Nimble with another Nimble).

 

 

Beyond that, you’ll need:

 

 

  • RTT (round trip time) Latency of 5ms or less between the sites
  • Hosts that can support ALUA. Those include:
    • Oracle RAC
    • VMware
    • Windows
    • HP-UX
    • RHEL
    • SUSE
    • Solaris
    • AIX
    • XenServer
  • A Quorum Witness – Software deployed at a third site that receives ongoing status from each array in the Peer Persistence relationship and helps determine when a failover needs to take place (a simplified sketch of this arbitration follows the list).
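
To illustrate the quorum witness’s role, here is a simplified Python sketch of third-site arbitration: each array reports a heartbeat, and if one goes silent while its peer stays healthy, the witness signals the survivor to take over. The timeout and decision logic are assumptions for illustration only, not HPE’s actual algorithm.

import time
from typing import Optional

# Simplified third-site quorum witness. Each array reports a heartbeat; if
# one stops reporting while its peer is still healthy, the witness tells the
# surviving array to take over the volumes. Thresholds are illustrative.
HEARTBEAT_TIMEOUT_S = 10

class QuorumWitness:
    def __init__(self, arrays):
        now = time.time()
        self.last_seen = {a: now for a in arrays}

    def heartbeat(self, array: str) -> None:
        # Each array periodically reports in over the management network.
        self.last_seen[array] = time.time()

    def failover_decision(self) -> Optional[str]:
        """Return the surviving array that should take over, if any."""
        now = time.time()
        alive = [a for a, t in self.last_seen.items()
                 if now - t < HEARTBEAT_TIMEOUT_S]
        if len(alive) == 1:     # exactly one array still reporting
            return alive[0]     # its standby paths get promoted to active
        return None             # both healthy (or both silent): do nothing

witness = QuorumWitness(["array-site-A", "array-site-B"])
witness.heartbeat("array-site-B")   # site B keeps reporting; site A goes quiet
# After HEARTBEAT_TIMEOUT_S seconds with no heartbeat from site A,
# failover_decision() returns "array-site-B".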

 

Demo, Please!

 

 

At this point, I had planned on providing a drawing and an explanation of how Peer Persistence utilizes the components mentioned above. However, instead I’m including two videos that do a great job of showing the Peer Persistence solution.

 

 

Why Peer Persistence?

 

 

If you already use 3PAR or Nimble in your infrastructure, you should consider this solution to improve your availability. It is a simple way to achieve high availability using a storage platform you already know. If you are considering a storage refresh, Peer Persistence is a reason to explore either 3PAR or Nimble as part of your infrastructure.

 

 

Zunesis can show you this technology first hand in our lab. And, you can see a case-study on our website where we successfully deployed this solution.

 

 
