I’d like to start by stating that this was just a teeny error, a simple oversight that could happen to absolutely anyone. If you love stories about Microsoft Excel, get excited! Or just hang with me for a few sentences and I will reward you with a great opportunity to laugh at my error, which again we’ve all agreed was minuscule.

I’m fortunate to be invited to brainstorming sessions with my mentor, who also happens to be my CEO. During this particular session, I was tasked with fleshing out a simple, effective formula to correlate the products of data sets throughout a workbook we’d been working on. To be honest, I was feeling rather optimistic, confident I knew more or less how to write something that would accomplish the task at hand.

It’s almost funny how quickly reality sets in when you hit the “enter” key on a formula you’ve just written and Excel boldly declares that your product is “FALSE”. Now, had I been solving for TRUE or FALSE, this would have been terrific, but this particular Newtonian masterpiece was designed to produce a numerical value, which “FALSE” decidedly is not. I raced back through my formula, and subsequently through every other formula my useless jumble of code derived data from, only to find no errors.

 

As my CEO peered over my shoulder, presumably questioning his previously brilliant decision to hire me, it hit me like a ton of bricks: my formula was based on what my eyes saw, rather than on the value actually stored in the cell. The value I was reading, 75.38%, fell within the parameters of the data set I was solving for, but you know what didn’t fall within that data set? .7538, the number Excel actually stores behind that percentage format. This simple oversight is why my brilliant formula was returning FALSE rather than producing the grade it was designed to show.
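
For anyone who wants to see the trap up close, here is a hypothetical reconstruction (the cell reference, the grade, and the 70 to 80 band are stand-ins of mine, not the actual formula from our workbook). A cell formatted as a percentage displays 75.38% while storing 0.7538, so a range check written against whole numbers falls through:

    =IF(AND(A2>=70, A2<=80), "C", FALSE)

With A2 storing 0.7538, the AND fails and the whole thing evaluates to FALSE. Compare against the stored value instead, and it behaves:

    =IF(AND(A2>=0.70, A2<=0.80), "C", FALSE)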

The reality is that I made an error, and had it not been for the fail-safe, it could have cost us money. I managed not to account for the fact that the “number” I was reading from an Excel sheet was displayed as something that, numerically, it was not. Simple enough for anyone to do; really, I’m sure it happens to everyone…

I am lucky to regularly work with a brilliant group of people who just happen to clean, manage, and normalize complex sets of data; their working lives are dedicated to ensuring the consistency and accuracy of every set. I made a simple human error, and it compromised my data. I was fortunate that the tolerance of our data sets was tight, so anything outside the correct answer would return “FALSE”. Long story short, it helps to have procedures in place to ensure that the data you’re using to make important decisions hasn’t been compromised.
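
That fail-safe was essentially a tolerance check. A hypothetical sketch of one looks like this (the reference and the 0.70 to 0.80 bounds are mine, purely for illustration):

    =IF(AND(B2>=0.70, B2<=0.80), B2, FALSE)

A value inside the expected band passes through untouched; anything outside it surfaces as a loud FALSE instead of silently flowing into downstream formulas, which is exactly how my mistake got caught.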

We are fortunate to conduct business at a time when many of our predecessors have already implemented SOPs to limit many errors, but as technology inevitably advances we face new potential pitfalls, along with the resurfacing of issues from bygone eras. We hire the best people, use the best programs, have replaced punch cards with fingerprint scanners and Rolodexes with CRMs, and work harder and smarter than ever before. The only part of this equation that creates a problem is that, at the end of the day, we’re all human.

Headlines about data breaches can force us to look at our processes, but breaches aren’t the only threat to a company’s data. The steps we take to keep the data our teams collect and input from becoming corrupted or inaccurate are commonly overlooked. Everything from customer information to payroll and accounting is susceptible to human error, and it’s imperative that we have processes and fail-safes in place to keep these errors from damaging our bottom lines. Just remember to keep multiple processes in place that ensure our data stays clean and accurate; all it takes is a ring finger inadvertently hitting delete to take us from profit to loss.