
August 25, 2015


How SMBs Can Minimize the Risk of Downtime

Even the smallest of small-to-midsize businesses (SMBs) have finally let go of their old-fashioned paper filing systems, and now store most of their data electronically. Many have become more advanced, using sophisticated applications and leveraging data in new ways to support business strategy. Of course, advanced use of technology and data requires more advanced data protection.

 

Even if you have implemented a data backup solution, an outage can still cost you data, whether it's caused by equipment failure, a major weather event, or a downed utility pole outside your office. How much data can you afford to lose? How fast can your data be restored? What kind of impact will an outage have on your business?

 

By establishing recovery point objectives (RPOs) and recovery time objectives (RTOs), organizations provide concrete answers to these questions. The RPO is the maximum age of the files that must be restored in order to resume business operations. In other words, the RPO tells you how much data loss can be tolerated. For example, if a certain type of data is backed up every night at 10 pm and that system crashes the next day at 1 pm, any data changed between 10 pm and 1 pm will be lost, so the effective recovery point for that data is 15 hours.

 

The RTO is the maximum period of time that an application, service or network can be unavailable after a failure occurs. Basically, the RTO tells you how much downtime and lost revenue your organization can tolerate. Of course, the impact of downtime isn’t just financial. A prolonged outage can affect the confidence of customers, business partners and vendors.

 

RPOs and RTOs help you determine how frequently backups should occur, what kind of backup infrastructure you need, and what your disaster recovery strategy should be. Generally speaking, as RPOs and RTOs become shorter, the risks associated with downtime and data loss are reduced.
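
To show how these objectives translate into a backup schedule, here is a minimal Python sketch; the data classes and objective values are hypothetical examples, not recommendations.

```python
from datetime import timedelta

# Hypothetical objectives per data class -- values are illustrative only.
OBJECTIVES = {
    "customer_orders": {"rpo": timedelta(hours=1),  "rto": timedelta(hours=2)},
    "file_shares":     {"rpo": timedelta(hours=24), "rto": timedelta(hours=8)},
}

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """In the worst case, everything changed since the last backup is lost,
    so the backup interval must not exceed the RPO."""
    return backup_interval <= rpo

if __name__ == "__main__":
    nightly = timedelta(hours=24)   # the nightly 10 pm backup from the example above
    for data_class, objs in OBJECTIVES.items():
        verdict = "meets" if meets_rpo(nightly, objs["rpo"]) else "violates"
        print(f"{data_class}: a nightly backup {verdict} an RPO of {objs['rpo']}")
```

A check like this makes the trade-off explicit: shortening the RPO for a data class forces a shorter backup interval, which in turn drives the choice of backup infrastructure.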

 

There are several technologies organizations can use to meet the demand for shorter RPOs and RTOs:

 

  • A snapshot is a group of markers that point to stored data, creating a virtual copy of that data as it existed at a particular point in time (see the sketch after this list). Unlike backups, snapshots can be performed while systems are online. They also provide faster data restore times.
  • Recovery-in-place, or instant recovery, redirects the user workload to a backup server so data can be restored immediately on a backup virtual machine. When the data is recovered, the workload is shifted back to the original virtual machine.
  • Replication is typically required when recovery-in-place doesn’t restore data quickly enough. This technique updates a secondary image on a separate storage platform, which is booted when a failure occurs so critical applications can be recovered almost instantly.
  • Copy data management reduces storage consumption by saving just the primary data and a single backup. Additional virtual copies can be created on an as-needed basis using a snapshot mechanism without changing the primary or backup copy.
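
To make the snapshot idea more concrete, here is a toy Python sketch of the pointer mechanism described above: taking a snapshot records references to the existing data rather than copying it, and a restore rolls the live view back to those references. The class and data names are purely illustrative.

```python
class SnapshotStore:
    """Toy illustration: a snapshot is just a map of block IDs to the
    versions that existed when the snapshot was taken."""

    def __init__(self):
        self.blocks = {}      # block_id -> current ("live") data
        self.snapshots = {}   # snapshot name -> {block_id: data reference}

    def write(self, block_id, data):
        self.blocks[block_id] = data

    def take_snapshot(self, name):
        # Copies only the pointers (dictionary references), not the data itself.
        self.snapshots[name] = dict(self.blocks)

    def restore(self, name):
        # Roll the live view back to the state captured by the snapshot.
        self.blocks = dict(self.snapshots[name])


store = SnapshotStore()
store.write("A", "invoice v1")
store.take_snapshot("nightly")
store.write("A", "invoice v2 (corrupted)")
store.restore("nightly")
print(store.blocks["A"])   # -> "invoice v1"
```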

 

Without an advanced data protection infrastructure and strategy, downtime can potentially cripple an SMB. Let the experts at ICG help you better understand these issues and technologies so you can implement a solution that minimizes the risk of downtime.

February 19, 2015


Why Cloud Backup Makes Sense for Remote Offices

More and more organizations are geographically dispersed, from branch or satellite offices to remote workforces operating from home. That means more and more corporate data, which doubles in volume every 18 months, is being created outside of company headquarters. Much of this data is mission-critical and must be protected and backed up regularly.

 

However, remote office/branch office (ROBO) backup is much different from data backup at the primary data center and presents a new set of challenges. ROBOs rarely have the dedicated IT staff, infrastructure, bandwidth, clearly defined best practices, and regular testing that exist at company headquarters. As a result, application performance and user productivity tend to suffer. ROBOs often compensate by deploying their own solutions on a smaller scale, but these disparate systems become difficult to manage and scale. ROBO environments are also typically less secure, making them more susceptible to data loss and compliance issues.

 

When developing a ROBO backup strategy, organizations need to understand the potential impact on business operations should the backup strategy fail, rather than waiting for something to go wrong. Approach data backup as you would at your primary data center. Classify and prioritize data. Plan for the worst and identify the various causes of disaster, including server and storage failure, human error, data corruption, natural disasters, and security breaches. Define recovery objectives for all applications and data by analyzing how long it would take for operations to be impacted if data or services are lost. Then determine how prepared you are to meet those objectives and regularly test your backup strategy so you can plug any holes.
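
One simple way to regularly test part of such a strategy is an automated check that compares the age of the most recent backup for each data class against its recovery objective. The sketch below is hypothetical; the data classes, directories and objectives are placeholders, not a prescribed layout.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical classification of ROBO data with per-class recovery point objectives.
DATA_CLASSES = {
    "point_of_sale": {"backup_dir": Path("/backups/pos"),   "rpo": timedelta(hours=4)},
    "file_shares":   {"backup_dir": Path("/backups/files"), "rpo": timedelta(hours=24)},
}

def latest_backup_age(backup_dir: Path) -> timedelta:
    """Age of the newest file in the backup directory."""
    newest = max(f.stat().st_mtime for f in backup_dir.iterdir() if f.is_file())
    return datetime.now() - datetime.fromtimestamp(newest)

def audit():
    for name, cfg in DATA_CLASSES.items():
        age = latest_backup_age(cfg["backup_dir"])
        status = "OK" if age <= cfg["rpo"] else "HOLE: last backup older than RPO"
        print(f"{name}: last backup {age} ago -> {status}")
```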

 

Traditionally, there have been two general options for a ROBO backup strategy. Onsite backup typically involves software that backs up data to disk, which is faster, or to tape, which is cheaper. A centralized backup model uses remote backup tools to back up ROBO data to a centralized site over the corporate WAN.

 

An increasingly popular third option is cloud backup. Cloud backup uses software to automatically gather, compress, encrypt and send a copy of data via the Internet to a service provider’s offsite server. Instead of purchasing and maintaining a backup system and worrying about under- or over-provisioning storage capacity, an organization pays a monthly fee for virtually unlimited capacity from a service provider. Scalable, elastic resource allocation makes it easy to handle uneven data usage patterns and high data volume, and the organization only pays for storage used. Cloud backup is also ideal for organizations that lack the IT staff, infrastructure, budget and bandwidth required for centralized backup.
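
The basic workflow that backup software follows, gather, compress, encrypt, then transmit, can be sketched as below. This is an illustration only: it assumes the third-party cryptography library, the destination directory stands in for the provider's offsite server, and a real agent would also handle scheduling, deduplication, key management and retries.

```python
import gzip
from pathlib import Path
from cryptography.fernet import Fernet   # third-party: pip install cryptography

def back_up_file(path: Path, key: bytes, destination: Path) -> Path:
    """Gather -> compress -> encrypt -> send (here, 'send' is a local copy
    standing in for the service provider's offsite server)."""
    raw = path.read_bytes()                      # gather
    compressed = gzip.compress(raw)              # compress before encrypting
    encrypted = Fernet(key).encrypt(compressed)  # encrypt with a symmetric key
    target = destination / (path.name + ".bak")
    target.write_bytes(encrypted)                # transmit / store offsite
    return target

if __name__ == "__main__":
    key = Fernet.generate_key()   # in practice the key is managed, not generated ad hoc
    # Example call (paths are placeholders):
    # back_up_file(Path("orders.db"), key, Path("/mnt/offsite"))
```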

 

A service provider's security is typically far superior to a ROBO's, and the provider's infrastructure is monitored 24/7 and tested frequently. On-demand data restoration supports optimal recovery objectives, and users have the flexibility to access data on any device from any location, which is especially valuable for ROBO users. New software and functionality can be added or updated instantly without requiring a ROBO's limited IT staff to worry about each rollout.

 

ICG’s managed cloud backup solution keeps ROBO data secure and accessible while reducing capital and operational costs. Let us show you how our flexible, scalable solution brings the reliability and performance of primary data center backup to your branch offices and remote locations.

 

 

January 13, 2015


Why Every SMB’s Resolutions Should Include Data Archival

The new year is a popular time for small-to-midsize businesses (SMBs) to get organized. An important part of this process is figuring out how to use technology to reduce the amount of physical “stuff” in the office, from calendars and sticky notes to cardboard boxes filled with paper files.

 

Although certain data must be retained for compliance purposes, many SMBs just keep everything as a precaution. Whether that data is stored physically or digitally, adding storage space on a regular basis can get expensive as the amount of data being produced continues to skyrocket. SMBs need to determine which data serves a legal purpose or has strategic value, and treat that data as a business asset that is properly stored and protected.

 

Because primary storage capacity isn’t unlimited, developing a sound data archival strategy is essential. Data archival is the process of identifying and moving data that is no longer actively used from primary storage to secondary storage for long-term retention. Data archival is sometimes confused with data backup, which is the process of copying data to a separate storage system so the data can be restored in case of equipment failure or disaster.

 

Data archival brings valuable business benefits that extend far beyond reducing clutter and becoming more organized. Because secondary storage costs less than primary storage, data archival reduces storage costs. By moving data to secondary storage, organizations eliminate the need to repeatedly back up data that hasn’t changed, allowing active data to be backed up more frequently. Data archived to meet regulatory compliance requirements will be well-protected against tampering and remain accessible. Data archival also offers strategic business value because it enables organizations to store large volumes of data for analysis. This data can provide valuable customer and operational insights that can be used to create competitive advantages.

 

Although the concept of data archival may seem simple, effectively archiving data is a complex process that requires an understanding of proper archival policies and best practices. The first step in developing an archival strategy is identifying what data should be archived and for what purpose. Again, it may seem simple to archive all data that hasn’t been updated for a certain period of time, but there are a number of factors to consider. For example, the methods for archiving email, database and file data are very different.

 

Also, what is the lifecycle of various types of data? Can certain data be deleted instead of archived? If data is archived, how long must it be retained? For example, compliance data will most likely need to be retained for a longer period than human resources data. A data archival policy without a deletion policy can become unnecessarily costly in terms of wasted storage space and time spent searching through extra data.
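
As a purely illustrative sketch of such a policy, an archival job might move files that have not been modified within a cutoff from primary to secondary storage and purge archived files whose retention period has expired. The retention periods, categories and directory names below are hypothetical.

```python
import shutil
import time
from pathlib import Path

DAY = 86400
ARCHIVE_AFTER = 365 * DAY                 # archive files untouched for a year (illustrative)
RETENTION = {"hr": 3 * 365 * DAY,         # hypothetical retention periods by category
             "compliance": 7 * 365 * DAY}

def archive_stale_files(primary: Path, archive: Path, now: float = None):
    """Move inactive files to secondary storage, preserving the folder layout."""
    now = now or time.time()
    for f in primary.rglob("*"):
        if f.is_file() and now - f.stat().st_mtime > ARCHIVE_AFTER:
            dest = archive / f.relative_to(primary)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), dest)     # move, don't copy: frees primary storage

def purge_expired(archive: Path, category: str, now: float = None):
    """A deletion policy keeps archive costs and search times in check."""
    now = now or time.time()
    for f in (archive / category).rglob("*"):
        if f.is_file() and now - f.stat().st_mtime > RETENTION[category]:
            f.unlink()
```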

 

Another factor to consider is accessibility. In order to maintain the integrity of archived data, organizations must establish clear guidelines that explain who may access various types of archived data and for what purpose. Once the types of data to be archived, the lifecycle of various forms of data, and accessibility policies have been identified, organizations should evaluate storage media and software based upon cost, control, performance and other factors. Cloud-based storage for data archival is quickly becoming a popular choice for SMBs.

 

ICG makes it possible for SMBs to realize the benefits of effective data archival while minimizing the need to purchase and maintain additional hardware as storage demands grow. Let us help you design and implement an archival plan and choose the solutions that make the most sense for your organization.

December 19, 2014


Why IaaS Makes Sense for Disaster Recovery and Business Continuity

The ability of an organization to recover company data and applications and continue business operations with minimal disruption is absolutely essential. If a major power outage, hurricane, flood, fire or security breach occurred, how long would your organization be out of commission? Would it take minutes, hours, days or weeks to recover? How much money would be lost? How much would your company’s reputation suffer?

 

Increasing technology investments and an increasing reliance upon real-time data and communication have led many organizations to update and overhaul their disaster recovery and business continuity strategies. Challenged to minimize the impact of disaster while simultaneously reducing costs, more and more organizations are finding that the best solutions reside in the cloud. In fact, a recent report from Infiniti Research estimates that the global infrastructure-as-a-service (IaaS) market will grow approximately 43 percent annually over the next five years, driven in large part by disaster recovery and business continuity planning.

 

IaaS is a model that enables an organization to leverage a third-party service provider’s technology, including storage, servers and networking infrastructure. The provider is responsible for managing, updating, maintaining and securing this infrastructure. Business applications, operating systems and other tools can be controlled by the organization’s IT department through an online management console.

 

The first and most obvious benefit of IaaS is that it significantly reduces capital hardware investments and operational costs for maintenance, power and cooling. Organizations pay for what they need and can automatically scale resources up or down according to business requirements. Rather than investing in IT resources that may only be used periodically, organizations can shift workloads to the cloud during peak periods.

 

From the perspective of disaster recovery and business continuity, IaaS provides redundancy that removes the risk of having a single point of failure. Instead of spending days waiting for the data center to get back up and running, the service provider simply shifts IT resources to remote infrastructure, which can be accessed through any secure Internet connection. IaaS ensures a reliable IT environment with little or no disruption to business operations. Security, traditionally a top concern for organizations considering cloud deployments, is typically more robust when using the IaaS model.

 

Simply put, IaaS minimizes the risk of an outage while reducing capital and operational costs. Let ICG show you how to use the power, flexibility and cost-efficiency of IaaS in your disaster recovery and business continuity strategies.

October 2, 2014


Are You Struggling to Manage Branch Office IT?

The two biggest IT priorities for organizations with branch offices are providing a consistent end-user experience and improving security. But while most remote sites have basic IT infrastructure and services, they typically rely on the main office for IT resources and support. Failure to provide branch offices with the same capabilities as the main office can drag down productivity and increase risk for the entire organization.

 

Branch office users require the same level of performance and secure access to applications, data and services as the main office. They must be able to securely collaborate with other employees and have the same level of responsiveness when IT issues arise. At the same time, data must be backed up and regulatory compliance must be maintained.

August 20, 2014


Why More SMBs Are Turning to Cloud-Based Data Backup

 


How lax are small-to-midsize businesses (SMBs) when it comes to data backup? According to a study by AVG Technologies, owners and managers spend more time straightening up their desks or ordering business cards than they do on backing up data. Although 30 percent believe more than half of their data is sensitive, one in four don’t even require a weekly backup.

 

Another study found that 53 percent of organizations don’t conduct daily backups. Approximately one-third of administrators feel this is not an efficient use of their time. These are the people organizations rely upon to protect their data.

 

Finally, a large percentage of SMBs don't implement a data backup and disaster recovery plan until after disaster strikes, according to the 2013 State of Cloud Backup study from Intronis. That's like buying homeowners insurance after your house burns to the ground.

July 29, 2014


Understanding the High Cost of Downtime

 


Most businesses today depend upon the availability of their computer systems — and that dependence creates tremendous risk. Downtime can and will occur, whether caused by weather-related disaster, power interruption, fire, water damage or human error. The cost and disruption to operations can be devastating.

 

According to a recent study by the Ponemon Institute, unplanned outages in U.S. data centers cost large organizations just over $7,900 per minute on average in 2013, up 41 percent from 2010. The average incident lasted 86 minutes, resulting in an average cost per incident of roughly $690,000. Those numbers were calculated from an analysis of 67 U.S. data centers with a minimum size of 2,500 square feet.

 

Understandably, those figures are a little hard for small business owners to fathom. If you have just a handful of servers and a couple dozen PCs, unplanned downtime isn’t going to be anywhere near that expensive. But the costs used to derive those figures apply to businesses of any size, and put into perspective what downtime can mean to your business.
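
For perspective, a back-of-the-envelope estimate can translate those costs into numbers sized for a small business. The sketch below is illustrative only; the revenue, staffing and productivity figures are placeholders, not benchmarks.

```python
def downtime_cost(minutes, revenue_per_hour, employees, loaded_wage_per_hour,
                  productivity_loss=0.5):
    """Rough estimate: lost revenue plus idle-labor cost during an outage."""
    lost_revenue = revenue_per_hour / 60 * minutes
    idle_labor = employees * loaded_wage_per_hour / 60 * minutes * productivity_loss
    return lost_revenue + idle_labor

# The Ponemon averages: just over $7,900/minute x 86 minutes gives roughly
# $679,000 with the rounded figures, in line with the ~$690,000 cited above.
print(f"${7900 * 86:,.0f}")

# A hypothetical 25-person business doing $500/hour in revenue, 86 minutes down:
print(f"${downtime_cost(86, 500, 25, 40):,.0f}")
```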

July 10, 2014


Business Continuity, Part 1: The Risk of Poor Planning

 


Barely a month into hurricane season, we’ve already had one storm leave its mark as Hurricane Arthur left hundreds of thousands of people without power in the U.S. and Canada. If your data center experienced an outage, how much time would it take to recover and access the data and applications required to operate? How much would it cost your organization?

 

While hurricanes and tropical storms aren’t the only causes of disaster, they serve as a stark reminder of the importance of business continuity and disaster recovery planning. The statistics show most organizations are unprepared even though unplanned downtime is virtually inevitable. Research from the Ponemon Institute found that 95 percent of companies experienced a data outage within the past 12 months. Another study from Gartner revealed that approximately one in four organizations have experienced a full data disaster.

February 21, 2014


Data De-Duplication Is No Longer Just for Backup


If a document is emailed to 500 people within your organization, do you think it makes more sense to store all 500 copies of that document… or just one?

Data de-duplication eliminates copies of data so storage space isn’t wasted with multiple instances of the same data. Using the above example, only one copy of the document would be stored instead of 500. The remaining 499 documents would be replaced with a pointer to the single stored document.
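
The mechanism can be sketched in a few lines of illustrative Python: each unique piece of content is stored once under its hash, and every additional "copy" is just a pointer to that hash. This is a simplified view; real de-duplication typically works on blocks rather than whole files.

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.objects = {}     # content hash -> the single stored copy
        self.pointers = {}    # recipient/file name -> content hash

    def store(self, name, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.objects:      # store the content only once
            self.objects[digest] = data
        self.pointers[name] = digest        # every other "copy" is a pointer

    def read(self, name) -> bytes:
        return self.objects[self.pointers[name]]


store = DedupStore()
attachment = b"Quarterly results deck"
for i in range(500):                        # the same document sent to 500 people
    store.store(f"user{i}/inbox/results.pptx", attachment)

print(len(store.objects), "stored copy;", len(store.pointers), "pointers")
# -> 1 stored copy; 500 pointers
```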

Traditionally utilized as part of backup and archival processes, the removal of redundant data can reduce the amount of data needing to be backed up by 90 percent or more. Because significantly less data is transmitted for remote backup and disaster recovery purposes, bandwidth requirements also drop by up to 99 percent. By conserving storage capacity and bandwidth, data de-duplication enables organizations to reduce storage costs and recover data faster.

While the benefits are significant, data de-duplication hasn't been widely used in primary storage, which houses data in active use. This has been largely due to performance issues and data integrity concerns. Also, backup data sets contain far more redundant data, so de-duplication solutions have focused on that tier of storage.

However, thanks to evolving data center technology, primary storage data de-duplication is likely to produce more business value today than it might have five years ago. Increasingly virtualized environments produce more redundant data, while cloud storage benefits from smaller data volumes that can be transferred efficiently. At the same time, the performance issues associated with data de-duplication can be offset by flash storage, which is much faster – and more expensive – than traditional disk and tape storage.

In a virtualized environment, most data originates in primary storage and is distributed to other storage tiers. Consequently, using data de-duplication to improve primary storage efficiency and optimization can produce significant downstream cost savings across the entire storage infrastructure.

A stronger business case for primary storage de-duplication has led more vendors to offer such solutions. For example, data de-duplication is built into Microsoft Windows Server 2012 R2 for primary storage. Administrators can minimize performance issues by scheduling data de-duplication jobs at specific times and configuring policies to control which files should be processed. Microsoft’s data de-duplication feature promises to provide data integrity, bandwidth efficiency and faster download times.

Let ICG assess the storage requirements of your network and help you determine how your organization could benefit by implementing a data de-duplication solution for backup, primary storage or both.