Lessons from Xconomy Enterprise Tech: data classification matters

I recently attended the Xconomy Enterprise Tech event with several of my colleagues. Perhaps the most interesting panel I saw was on Cyber Insecurity, featuring Tas Giakouminakis, CTO of Rapid7; Christopher Ahlberg, CEO of Recorded Future; Michael Daly, CTO of Raytheon Cybersecurity; and Scott Montgomery, Chief Technical Strategist of Intel Security.

Thematically, one key trend that surfaced throughout the session was data classification. The experts agreed that not all data needs to be protected equally: data can generally be divided into two categories, important data and regular data. Today, however, most companies are poor at classifying their data, which forces them to protect everything as if it were their most important asset. Not only is this hard, it’s nearly impossible.

Maintaining a secure environment means chasing a continuously moving target. Over time, external threats grow in sophistication, and the most motivated attackers eventually outsmart existing defenses. Once a security flaw is exposed, new approaches and systems are created to address the known vulnerabilities; these are implemented until they, too, are replaced by more robust successors. Our customers have shared many stories with me about how they have evolved their systems and processes to keep pace with changing threats. The more data a security system is tasked with protecting, the more potential vulnerabilities it exposes. As a rule, the broader your focus, the less depth you can devote to any one element.

The panelists emphasized that the most logical response to evolving security threats is to focus your security efforts on protecting what matters most to you. If you could classify which data is most important, you could concentrate effort on a smaller attack surface and apply less scrutiny to less important information (e.g., personal photos and MP3s). In fact, many of our customers have discovered dormant data that could be archived or deleted, freeing up valuable storage space and letting them focus on protecting the information of greatest value to the business.

The reality is that it’s incredibly difficult to categorize your data, and Giakouminakis and Daly explained the key challenges. Some classification systems rely on manual tagging, which is human-generated and difficult to manage at scale. Such systems can also get in the way of end users simply doing their jobs, something our customers understandably want to avoid. Other systems that attempt to auto-categorize tend to consume tremendous infrastructure and computing resources, degrading performance in ways that are detrimental to daily work.

The panel members agreed that the ideal solution is a platform that proactively filters and categorizes data without disrupting existing workflows. Instead of requiring end users to tag content, or dedicating compute resources to pore through data after the fact, tools that automatically categorize content in real time, based on the metadata or content of each file, offer the most elegant solution.
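To make that concrete, here is a minimal sketch of rule-based classification driven by file metadata and content. The patterns, labels, and extension list are illustrative assumptions on my part, not any vendor's actual rule set:

```python
import re
from pathlib import Path

# Hypothetical classification rules; the patterns and labels are
# illustrative only, not an actual product's rule set.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
LOW_VALUE_EXTENSIONS = {".mp3", ".jpg", ".png"}

def classify(path: Path) -> str:
    """Assign a coarse label using metadata first, then content."""
    # Metadata check: the extension alone can rule a file out of deep scanning.
    if path.suffix.lower() in LOW_VALUE_EXTENSIONS:
        return "regular"
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return "unreadable"
    # Content check: any sensitive pattern promotes the file to "important".
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            return f"important:{label}"
    return "regular"

if __name__ == "__main__":
    for p in Path(".").rglob("*"):
        if p.is_file():
            print(p, "->", classify(p))
```

Even this toy version shows the appeal of the approach: cheap metadata checks filter out low-value files before any content inspection, so scrutiny concentrates where it pays off.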

DataGravity does this by combining security software with a storage array. The design stores the primary data along with an indexed copy of that data, which is used for tagging files and performing near-instantaneous searches. An organization can therefore create classification rules against the content of files as those files are written to the array; you can find and organize your most important data without end users having to do anything. This automated method offers a highly accurate and scalable way to classify your data, and our customers who use this feature appreciate the flexibility it offers.
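The sketch below models that ingest-time flow: classification rules run as data lands, populating an index that turns tag-based search into a simple lookup. This is a conceptual toy under my own assumptions (the class names, rule format, and "confidential" rule are all hypothetical), not DataGravity's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class IndexEntry:
    path: str
    tags: set[str] = field(default_factory=set)
    text: str = ""

class ClassifyingIndex:
    """Toy model of ingest-time classification. Rules run as data is
    written, so no end-user tagging or later batch scan is needed."""

    def __init__(self, rules):
        # rules: iterable of (tag, predicate) pairs applied to file content
        self.rules = list(rules)
        self.entries: dict[str, IndexEntry] = {}

    def ingest(self, path: str, content: str) -> IndexEntry:
        # Classify at write time, and keep the indexed copy for search.
        entry = IndexEntry(path=path, text=content)
        for tag, predicate in self.rules:
            if predicate(content):
                entry.tags.add(tag)
        self.entries[path] = entry
        return entry

    def search(self, tag: str) -> list[str]:
        # "Instantaneous" search is just a lookup over the prebuilt index.
        return [p for p, e in self.entries.items() if tag in e.tags]

# Example with a single illustrative rule.
idx = ClassifyingIndex([("confidential", lambda t: "confidential" in t.lower())])
idx.ingest("/share/q3-board-deck.txt", "CONFIDENTIAL: Q3 results ...")
idx.ingest("/share/lunch-menu.txt", "Tacos on Tuesday")
print(idx.search("confidential"))  # ['/share/q3-board-deck.txt']
```

The design choice worth noticing is that the cost of classification is paid once, at write time, rather than on every search or audit afterward.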

Companies that have a data classification system requiring neither major infrastructure resources nor a drastic change in behavior are better able to align their security efforts with the importance of their data. The experts at Xconomy made a compelling case that this approach beats relying on human-generated classification: machines classify data more thoroughly and reliably than people do. Reliable, thorough classification, in turn, lets you build a focused, concentrated security posture that protects different types of data according to their significance. Overall, this offers a more cost-effective and easier-to-manage security strategy.

Think about your data classification activities. Does your organization currently classify its data, and if so, how? Do you have a system in place that accurately determines which data is important and which is not? What about your security efforts: do you know where your vulnerabilities are? Do you find it hard to appropriately protect all your data in every place it lives and every way it can be accessed?

Learn more about how DataGravity can help you define, detect and defend your sensitive data—download the solution brief.

Jake Cohen

Jake is the Director of Customer Marketing at DataGravity. He is responsible for working with customers to collect and share insights, tips and hysterical jokes. His career has spanned a variety of sales and marketing roles, experience he now uses to evangelize our customers and showcase all the value DataGravity offers.