In the past twenty years, digital forensics has evolved from a reactionary set of loosely repeatable steps to a mature, documented, easily repeatable set of processes and procedures, often governed by an up-to-date corporate Acceptable Use Policy (“AUP”). Early in this evolution, we solved most of the glaring problems: clearly defining what employees could and could not do, how to preserve the chain of custody, and how to make a forensically sound copy of a user’s hard drive. We not only had to figure out how to physically accomplish these tasks, but we also had to establish precedent. No one would argue that an explicit photo calendar hung on an office wall was appropriate, but what were the rules when an employee was visiting inappropriate sites? A swimsuit calendar on the wall could be seen by anyone walking by, but web searches are far more private.
Third time’s a charm?
Early on we had mainframes. While there was only so much that users could do, there were still a number of risks that we needed to mitigate. We needed to get everyone to use unique user IDs rather than a shared account; without them there could be no accountability. We instituted periodic access reviews to ensure that people had only the access they needed, and that people who left the company or department no longer had active accounts.
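The access-review control described above boils down to cross-referencing active accounts against current employees. A minimal sketch of that check, where all account names and lists are illustrative assumptions rather than anything from the original:

```python
# Sketch of a periodic access review: flag active accounts whose owners
# have left the company. All names here are hypothetical examples.

active_accounts = {"asmith", "bjones", "cwu", "svc_backup"}
current_employees = {"asmith", "cwu"}
approved_service_ids = {"svc_backup"}  # sanctioned non-human accounts

# Anything active that maps to neither a current employee nor an
# approved service account is a candidate for disabling.
orphaned = active_accounts - current_employees - approved_service_ids
print(sorted(orphaned))  # ['bjones']
```

In practice the inputs would come from a directory service and an HR feed, but the core of the review is this set difference, run on a regular schedule.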
Once we got all that figured out, they changed the rules on us again. Dumb terminals and centralized processing were replaced by the client-server model. Now we had a whole new set of things to worry about. Data no longer stayed on mainframe DASD; instead, a hybrid approach took form. Some data would remain on servers, and other data would be stored locally on users’ workstations. Now we had to find ways to track what was done on each workstation. This was akin to the Industrial Revolution, but for PCs. We learned how to use tools like Tableau Imager, EnCase, and FTK to make a forensically sound copy of a hard drive and to peek into what secrets we could find in file slack. (File slack refers to the leftover artifacts from previous, partially overwritten files.)
Now everything is mobile, and the old perimeter means nothing. Laptops with SSDs don’t have file slack the way spinning hard drives did. Workers may do a portion of their work on a personal iPad. Half of a person’s email will be managed on their phone, not their desktop. Many of your company’s legacy client-server apps are being replaced with cloud solutions accessed from a web browser. Who knows what the future will bring? The only guarantee we have is that future change will be rapid and disruptive, leaving IT departments to figure out a way to make it all work and let the business continue to run.
Don’t throw the baby out with the bathwater
Just because our footprint has expanded does not mean that all of our previous efforts are useless. While not everything is done from a user’s desktop anymore, we shouldn’t abandon forensic analysis of user desktops when it’s needed. Also keep in mind that you don’t have to get it perfect from day one. This is a case where you don’t want to let perfection get in the way of good enough – a well-planned, fully implemented, 75% solution always trumps a perfect solution stuck in the planning stage. Once you have a plan implemented, use a continual improvement process to identify gaps and conquer the more difficult aspects. The sign of a good program is one that can continually evolve as the landscape changes.
What you need to start doing today
So what are we supposed to do? Start with the basics. You still need an Acceptable Use Policy, reviewed annually, that gets communicated to all users. This draws a line in the sand on what is allowed and what isn’t. As user habits change and technology evolves, that annual review keeps the policy equipped to handle the changes. Once you have a clearly defined policy, all technical and operational controls can be molded around it.
Once all of your conventional controls are in place, it’s time to identify gaps. What should we do about cloud services? What about user-controlled virtual machines? First, articulate the actual gap: “I am concerned about cloud services because a user can connect to a cloud service that I don’t know about.”
The next step is to determine the probability and risk. Is this something that will happen often, or once in a blue moon? Will it be a slight inconvenience, or will it threaten the financial well-being of the company? You need to properly assess the risk in order to make informed decisions.
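The probability-and-impact weighing described above is often formalized as a simple risk matrix, multiplying a likelihood rating by an impact rating. A minimal sketch, where the 1–5 scales, labels, and band thresholds are illustrative assumptions, not anything prescribed by the original:

```python
# Qualitative risk-matrix sketch: score = likelihood x impact.
# The 1-5 scales and the band cutoffs are illustrative assumptions.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Multiply the two ratings to get a 1-25 risk score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_band(score: int) -> str:
    """Bucket a score into a coarse priority band."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A once-in-a-blue-moon event that threatens the company's financial
# well-being can still outrank a frequent minor inconvenience:
rare_but_severe = risk_score("unlikely", "severe")      # 2 * 5 = 10
frequent_but_minor = risk_score("frequent", "minor")    # 5 * 2 = 10
```

The point of the exercise is not the exact numbers but forcing both dimensions to be considered before deciding which gap to tackle first.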
The third step is to look for a solution. If you’re worried about cloud services and shadow IT, you need better visibility into what people are doing. In order to have a proper inventory of users, services, and applications, you may want to look into a Cloud Access Security Broker (“CASB”). A CASB will give you insight into what services your employees are using every day. Then move on to the next biggest gap, and repeat until you are satisfied with your residual risk level.
Don’t look for one solution to solve all of your gaps. Figure out what is allowed, and let everyone else know. Roll out a plan that gets most of what you want. Work to continually improve your security stance by identifying the biggest risks and implementing new controls. This will put you in a much better place than hand-wringing and admiring the problem. That is, until the next big disruptor comes along…