Follow Policy to Avoid Data Loss

Human Error is a Shortcut to Losing Your Data

Human error is the second leading cause of data loss. Human error ranges from accidental deletion of files and records, to ignoring policies regarding data, to rebooting systems without proper shutdown procedures. Blind belief and trust that your coworkers will not only follow policy but never make a mistake at all are fundamental to using this shortcut to its fullest potential for losing your data.

Avoid Losing Data Through Human Error

There are two fundamental reasons for human error: ignorance and arrogance. Attempting to change human nature is the height of arrogance. People have a tendency to be incredibly poor at following policy. Thus specifying that all “important” data will be stored only on centralized corporate servers and storage tends to fail as soon as a C-level executive loses the data on their notebook. But even when people try their best to follow policy, accidents such as file and record deletion will occur. The best defenses against human error are automation and retention. Automation allows policies and procedures to be created and automatically executed. Retention allows recovery of data even when the data loss isn’t noticed for some period of time.

Retention is one of the fundamental differences between backup and simple high availability (which is typically achieved with some type of replication). High availability handles hardware failure well, but it does a poor job of handling logical failures such as those caused by human error, because a logical failure is simply replicated in a highly available system. Of course, using high availability to protect against hardware failure and backup to protect against all types of failure is a common technique for protecting data and systems.

Previously, we described why D2D is such an important component of protecting your system. When we discuss any type of logical failure, including human error, another important concept is to protect your data using a superset of D2D called D2D2x (Disk-to-Disk-to-Any). D2D2x simply means that you have longer-term strategies for backups, to either on-premise rotational archiving media (disk or tape, although tape has the risks we've discussed previously) or to a private or public cloud.

Unitrends

Have You Ever Wanted to Send Large Files by Email?

Google announced recently that it will be integrating Google Drive into Gmail, a move that will make it possible to send files up to a massive 10GB in size over your email.


A new button in the Gmail compose window will give users the ability to attach a file from their Google Drive account instead of attaching the file itself to the message body.


Once attached, Gmail will check that your recipient has permission to view the file in your Drive account; if not, the system will prompt you to grant that permission, and then it sends the message.


The feature works not only for files you attach to the message, but also for links to items stored in Google Drive that you paste into a message.


Since you’re essentially sharing a link to the file in the cloud rather than the file itself, you can continue to update it, with those updates showing up for your recipient as well. You can collaborate on the shared document with the recipient directly in Google Drive, keeping a single copy rather than passing drafts back and forth, further filling your mailbox.


Microsoft currently offers a similar feature via Outlook and SkyDrive as well.


Each Google Drive user is granted 5GB of free storage from Google. To store and send files larger than 5GB, users will need to purchase additional Google Drive storage space to accommodate those files. Currently you can purchase 25GB of additional Google Drive storage for $2.49 per month, or 100GB for $4.99 per month.


Hardware Failure: The Right Way to Backup Your System

Hardware failure is the leading cause of data loss, so ignoring hardware failure is the fastest shortcut to losing data. Because you don't want to lose your data, do not ignore hardware failure: back up your systems and data.

If you use tape as your backup medium, you could also lose your data. Given the high failure rates associated with tape, sooner or later you will need to recover your data and find that you cannot.

Using SAN or NAS storage devices as both the source and the target of a backup is another highly probable way to lose data. We are not referring to snapshots taken in between physical transfers of data off the SAN or NAS; we are talking about using your SAN or NAS exclusively for both primary storage and backup storage.

What to do instead

To protect yourself from hardware failure, you have to move your data from primary storage to a completely separate secondary storage. That secondary storage can and should be less expensive than your primary storage, but it has to have RAS (Reliability, Availability, Serviceability) characteristics that are as good or better than your primary storage.

Those requirements rule out tape, as well as partitioned primary storage (SAN or NAS), although SAN and NAS snapshots may still be used in between backups of primary storage. The best approach is some type of D2D (Disk-to-Disk) backup. The advantage of D2D backup is that you are using secondary media with higher reliability characteristics than tape, while still ensuring that you have a physically separate secondary storage set so that you can survive hardware and system failure.

For help with Hardware Failure, contact NPV.com

Unitrends

What Causes Data Loss?

In order to understand shortcuts to losing your data, the first thing we need to do is understand the most common reasons that data is lost.

The primary causes of data loss are:

  • Hardware failure
  • Human error
  • Software corruption
  • Theft
  • Computer viruses
  • Hardware destruction

The results of the two best studies regarding data loss in the real world are summarized below:

Root Cause and Incident %

  • Hardware failure – 40%
  • Human error – 29%
  • Software corruption – 13%
  • Theft – 9%
  • Computer viruses – 6%
  • Hardware destruction – 3%


Root Cause vs. Customer Perception vs. Actual Incident %

  • Hardware or system problem – customer perception 78%, actual incidents 56%
  • Human error – customer perception 11%, actual incidents 26%
  • Software corruption – customer perception 7%, actual incidents 9%
  • Computer viruses – customer perception 2%, actual incidents 4%
  • Natural disasters – customer perception 1%, actual incidents 2%


Together, these form the foundation for our advice on the most effective path for you to lose your data. For assistance with data loss, or for help in preventing data loss, contact NPV.com.


Unitrends

Backup Data Services and Statistics

No one wants to lose data.  The consequences of data loss are dire; below is a sampling of just a few statistics related to the impact of data loss on business.  Did you know?

  • 93% of companies that lost their data center for 10 days or more due to a disaster filed for bankruptcy within one year of the disaster. 50% of businesses that found themselves without data management for this same time period filed for bankruptcy immediately. (National Archives & Records Administration in Washington)
  • 94% of companies suffering from a catastrophic data loss do not survive – 43% never reopen and 51% close within two years. (University of Texas)
  • 30% of all businesses that have a major fire go out of business within a year and 70% fail within five years. (Home Office Computing Magazine)
  • 77% of companies that do test their tape backups found backup failures. (Boston Computing Network, Data Loss Statistics)
  • 7 out of 10 small firms that experience a major data loss go out of business within a year. (DTI/Price Waterhouse Coopers)
  • 96% of all business workstations are not being backed up. (Contingency Planning and Strategic Research Corporation)
  • 50% of all tape backups fail to restore. (Gartner)
  • 25% of all PC users suffer from data loss each year (Gartner)

Unitrends

Reasons Not to Locate Your Server in a Flood Zone

The following article was of great interest to me. I only wish I had written it:

Would you locate your datacenter in a coastal flood zone?

I’m sure there are many fine people working in the Northeast who are tirelessly scrambling after Hurricane Sandy led to severe flooding and power outages on Monday and Tuesday.

So are the many editors and writers at The Huffington Post, Gawker and Buzzfeed, three large news websites that were down yesterday because significant flooding took servers offline. These publications had to publish content in other locations — HuffPo at corporate parent AOL’s site; Buzzfeed on Tumblr; Gawker’s various properties on liveblogs hosted on subdomains for their sites.

Forgive me, but I’m scratching my head here: why would you host your major, major website solely in low-lying downtown Manhattan? Have we learned nothing of disaster recovery and resiliency?

New York has three hurricane evacuation zones. Datagram, the hosting provider for these sites, is in Zone A, described by the city as follows: “Residents in Zone A face the highest risk of flooding from a hurricane’s storm surge. Zone A includes all low-lying coastal areas and other areas that could experience storm surge in ANY hurricane that makes landfall close to New York City.”

I know New York is a major center for many things, including technology, and I don’t mean to kick a guy when he’s down, so to speak. I’m sure the details will surface soon enough — perhaps so many services went down in New York that there was no alternate path to be had.

But I suspect this wasn’t the best strategic decision. No?

Content – ZDNet

Cloud Email – Google Apps or Office 365?

Office 365 has a lot of potential. It is Microsoft’s second try at dedicated cloud-based email. Office 365 is Microsoft’s answer to the growing threat Google Apps poses to Exchange.

Microsoft is still developing a wide range of Server and Exchange revisions. It is innately a traditional software company, but business is moving to the cloud. Businesses now have to choose between two well-qualified giants: continue in Microsoft-land with Office 365, or jump ship to the maturing email newcomer that is Google Apps? It’s a tough question to answer, and one that small businesses bring to NPV regularly.

Lots of businesses look to these cloud giants solely for email. Here is a breakdown of both so businesses can form their own reasonable opinions.

Both Office 365’s Outlook web client and Google Apps’ Gmail app have various things to offer. One size doesn’t fit all, and there is comfort in the good parts of each. From a purely feature and performance perspective on email clients alone, a few judgments can definitely be made.

The Speed of Office 365 compared to Google Apps

Many business owners don’t want email if they can’t have Outlook. Usually that’s because they just haven’t seen the capabilities of Google Apps’ Gmail, but realistically it’s also because Outlook is the only thing they’ve used for years. Either way, Microsoft has crafted the closest clone to what desktop Outlook looks like, without the need for desktop software.

One of the worst aspects of Office 365 email is that, compared to the speed of Gmail, O365 is awkwardly slow. For this article, two obese email inboxes were compared, one on each of the two providers.

The Google Apps’ Gmail account showed consistent performance when changing folders (labels, in native Gmail speak), responding to messages, and working with different aspects of the account in general. The Office 365 account, by contrast, was slow in many respects in every browser tested, especially when sifting through the primary inbox area; even after being fully loaded, it had a tough time just scrolling through over 2,400 emails. Gmail was as fluid with a full inbox as with an empty one.

Not everything is bad in Office 365 Outlook. It earns some high marks for the way menus and settings areas are organized. Compared to Gmail, which tends to feel crammed in some option screens, Office 365 divides settings logically by task and affords some cleanliness in overall organization and layout. And for those who are looking for a brisk cloud replacement for desktop Outlook with a small learning curve, Office 365 delivers.

Outlook diehards will find themselves right at home. Also, Google Apps allows for seamless syncing with desktop Outlook if needed.

Gmail’s Strongest Points: Spam Filtering, Speed and Flexibility

One of the benefits of Google’s handling of Apps is the fast-track development path that allows Gmail to evolve at a much faster pace than Office 365 Outlook. There’s no comparing the two when it comes to new features and filling gaps on sought-after needs. Google is definitely in tune with what its users are asking for, and just by skimming their public update feed, you can see that “stale” is far from how to describe Google’s stance on Google Apps.

When it comes to spam filtering, one of the hottest topics in email today, Google Apps’ Gmail is doing an overall better job than Office 365.

That’s not to say Office 365 is bad. It’s leagues better in spam filtering than traditional on-premise Exchange.

Traditional Outlook users know that without some third-party app involved, spam becomes nearly uncontrollable. Clearly Google Apps is better at spam control, but Office 365 is a comfortable second-place candidate.

Each Platform takes a Different Approach to Unified Communications

An important aspect of each platform is how it views the topic of unified communications, and each system is starkly different in this area. Microsoft clearly appeals to those who may have on-premise phone systems capable of tying into its backend, while Google Apps takes an à la carte approach in which users can take advantage of as little or as much as they please.

If you’re looking for a seamless connection to your IP phone system via Microsoft’s Lync, then Office 365 will suit your fancy.

Google Apps’ Gmail is an entirely different beast. It offers a very nice integrated chat system via Google Talk that essentially taps into the entire Google Accounts network (meaning Google Apps and Gmail accounts). If you have a Google+ account tied to your email address, you can chat directly with connected friends in the same interface.

Where Gmail’s “extended” communication functionality shines is in voice and video chat. Right from your browser, you can initiate a Skype-type voice call, or opt for the bells and whistles of video chat through Google Hangouts.

Office 365 Outlook offers a very minimalistic approach to in-browser communications. There is a semblance of internal company chat in Office 365, but it’s very clunky and not half as clean as what Gmail provides.

Gmail offers the Best All-Around package, but Office 365 has its Place

Most businesses looking to make the jump to the cloud will likely find themselves best suited with Google Apps’ Gmail. It was built from the ground up for browser-based email usage, and truly eliminates the need to fall back to desktop Outlook (unless you genuinely need it). Between the benefits it affords around “extended” communications features, spam filtering, and overall speed, it’s a clear winner in my book.

— BetaNews

Did you know? – Google Drive Enhanced Platform

In our managed support relationships with clients, we often find the need to give clients the ability to work on their documents anywhere. Consequently, we go through great effort to provide remote access to local file servers. Recently, Google started rolling out an enhanced variation on its Google Docs platform, called Google Drive. If you are familiar with Google Docs, you’ll recall that you could upload limited document types, such as Excel, Word, and PDF files, and share them with groups or individuals. You could also set permissions and privileges by user. With this capability, collaboration took a bold step forward, allowing users to work on the same Excel document at the same time, in real time: if I change a cell, my peers see the change immediately in their browsers.

Well, Google took this feature to the next level by extending the ability to upload any document type. In fact, they let you upload almost anything and have it accessible anywhere that you can open your email. This bold new product is called Google Drive. You can use it in a couple of ways. You can install an app on your computer that looks just like a folder on your desktop. That new folder will then synchronize anything you put into it up to your personal cloud folder. If you go someplace and find that you need something from that folder, you simply open your email, click on the ‘Drive’ link at the top, and download whatever you need. Or, if you want to be able to use your files at home, you install the same Google Drive app on your computer at home, and all of a sudden your Google Drive files are synchronized: change a file at home, and it’s uploaded to the cloud and synchronized to your work computer. This eliminates the need for remote access, and all of its cumbersome elements, forever. Use it as a backup for your pictures, or anything of value that you never want to lose. Furthermore, you can share any one given file with anyone. Some company domains may want to restrict this ability; for example, if your company hosts email with Google Apps, you can restrict staff to sharing files only with other individuals in the company. Outside email addresses would not be allowed to receive invitations to view or edit files on your Google Drive.

Give it a try today, or call us to learn how we can migrate your corporate email to Google Apps.

Non-Merged IT Audit

Introduction

NPV Corporation visited a client on January 24, 2007 to review the organization’s current business use, needs, and management of technology. This report provides the findings; immediate, short-term, and long-term recommendations; and a rating of the organization based on industry standards and best practices. This was a single-site IT assessment. The site was evaluated to better understand the organization’s current IT readiness and to assess the organization’s current IT strategy.

We outlined the network infrastructure, including both the local area network and the wide area network of the organization. Throughout the process we analyzed the current infrastructure.

Scope

Throughout this assessment, we examined the following areas:

  • System Inventory for current server setup
  • Hardware / Software
  • Network / Connectivity Audit
  • Sample Desktop Systems Audit

Overview

This client offers the most comprehensive, cost-effective insurance coverage. The company has about 15 computer users and shares an office with another insurance agency. The two agencies share the T1 costs by splitting the 15 voice lines between both organizations and sharing the data across both offices.

As a result of individual interviews and a site visit, we uncovered several things about the network that have been done very well. In general, we found that the client has made tremendous progress in overall network architecture to improve intra-office connectivity.


How To Resize RAID Partitions (Grow) (Software RAID)

This article describes how you can grow existing software RAID partitions. I have tested this with non-LVM RAID1 partitions that use ext3 as the file system. I will describe this procedure for an intact RAID array.

1. Preliminary Note

The goal of this exercise was to upgrade the drives on the RAID1 array on the file server, without having to move files or re-install a new clean operating system. Essentially, I wanted to swap the drives, and grow the file system.

The current server has (2) 500G SATA drives, making up two RAID partitions: /dev/md0 (O/S) and /dev/md1 (/home).

[root@waltham ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[1] sda3[0]
1931004864 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
20474688 blocks [2/2] [UU]

In summary, I took out the current primary 500G drive and cloned it onto (2) 2TB drives. The reason for cloning the primary drive was that the boot sector is only written to the primary drive. That way, both clones would have a copy of the boot sector, in case that part of the disk is ever corrupted.

In a software RAID, only the primary drive retains a copy of the boot sector. I learned this the hard way.
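An alternative (or complement) to cloning purely for the sake of the boot sector is to install the boot loader on both RAID members, so that either disk can boot the system on its own. A minimal sketch, assuming a BIOS machine with GRUB and that the two members are /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

With the boot code written to both disks, losing either member of the mirror no longer leaves the system unbootable.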

Once both drives were cloned with Clonezilla, I took out the old drives, put in the two new cloned drives, and booted the system. The detailed steps in the process follow.

Once I rebooted the system with the two new 2TB drives, it recognized that the drives were members of an array, but it would not re-establish the array, as you can see:

[root@waltham ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda3[0]
1931004864 blocks [1/2] [_U]

md0 : active raid1 sda1[0]
20474688 blocks [1/2] [_U]

The primary disk came online, but the other one did not.

Knowing that the data was intact, since one of the drives booted up fine, I ran fdisk on the drive that did not come up.

Fdisk let me delete the current 490G partition, sdb3, and re-create it using the maximum allowed space. When I recreated the partition, it was now almost 2TB.
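The interactive fdisk session for that step looked roughly like the following (a sketch only; it assumes the new drive is /dev/sdb, and the exact prompts vary between fdisk versions):

fdisk /dev/sdb
p          # print the existing table and note where sdb3 starts
d, 3       # delete the old 490G partition 3
n, p, 3    # re-create primary partition 3, accepting the defaults so it spans the maximum available space
t, 3, fd   # set the partition type back to "Linux raid autodetect"
w          # write the new table and exit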

I then added the partitions to their respective arrays:

mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb3

/dev/sdb2 and /dev/sda2 are swap partitions.

Once this was done, the array started to rebuild itself. You can see the progress by typing the following command:

cat /proc/mdstat
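If you prefer a continuously refreshing view of the rebuild, something along these lines also works (this assumes the watch utility is installed; mdadm --detail shows the rebuild state and percentage):

watch -n 10 cat /proc/mdstat
mdadm --detail /dev/md1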

Once the mirroring completed, I took /dev/sda off the array and ran fdisk on /dev/sda3 in order to re-size it to the full size of the disk.
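Taking /dev/sda out of the live array before re-partitioning it is a two-step mdadm operation; a sketch, assuming /dev/sda3 is the member of /dev/md1 being replaced:

mdadm /dev/md1 --fail /dev/sda3
mdadm /dev/md1 --remove /dev/sda3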

After that was done, you need to re-add the new partition to the array in order for the mirroring to start again on the new (bigger) partition. Since /dev/md1 is still defined at 500G, we need to take the following steps before proceeding.

2. Intact Array

I will describe how to resize the array /dev/md1, made up of /dev/sda3 and /dev/sdb3.

2.1 Growing An Intact Array

Boot into single-user mode. When the GRUB loader comes up, hit ‘e’ for ‘edit’ and select the first boot command, select ‘e’ again, add the word ‘single’ to the command string, and then hit ‘b’ to continue the boot process. At the hash (#) prompt, you will need to unmount the array that you wish to grow:

umount /home

Then activate your RAID arrays:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm -A --scan

Now we can grow /dev/md1 as follows:

mdadm --grow /dev/md1 --size=max

--size=max means the largest possible value. You can also specify an explicit size, given in KiB.
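For example, to grow the per-device size to 1TB rather than the maximum, something like the following would do it (the number is illustrative only; mdadm interprets the value in KiB, so 1073741824 KiB corresponds to 1TB):

mdadm --grow /dev/md1 --size=1073741824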

Then we run a file system check…

e2fsck -f /dev/md1

…, resize the file system…

resize2fs /dev/md1

… and check the file system again:

e2fsck -f /dev/md1

Afterwards you can boot back into your normal system, and the filesystem should show the full size of your grown space:

[root@waltham ~]# df -H
Filesystem Size Used Avail Use% Mounted on
/dev/md0 21G 5.6G 14G 29%  /
tmpfs 4.1G 0 4.1G 0% /dev/shm
/dev/md1 2.0T 259G 1.6T 15% /home