Q&A: (Almost) Everything You Want to Know About the DataGravity Discovery Series

It’s been one heck of an awesome ride coming out of stealth, winning the Best of VMworld award and shipping our first units to customers. As this is my first contribution to the blog, I feel an introduction is in order as you may be asking, “Who’s this Will guy?”

My role here at DataGravity is that of technical marketing manager (that’s me with the glasses giving a demo of Discovery Series at VMworld in August). I’ve spent more than 13 years on the vendor side of enterprise IT, doing everything from tech support and break-fix to installation and professional services, and everything in between. For the past six years or so, I’ve been focused on creating technical content that helps customers get the most out of their solutions. I joined DataGravity in July because, after one look, I knew this was an opportunity to join a team that is fundamentally changing the storage landscape.

Over the coming months, we’ll dive into all the cool things the DataGravity Discovery Series can do. However, if you can’t wait to learn more, I run weekly online demos of the Discovery Series to show participants the types of insights they can find through the platform and to answer questions about what this technology can bring to their organizations. We’re hearing great questions during these sessions, and I’ll be sharing them, along with our answers, here on the blog.

Below are some Q&As discussed during recent demos:

Q: It appears that the Discovery Series intelligence node offers many of the same features as a targeted backup appliance, for example, deduplication of backup and snapshot data. In addition, it appears that the DataGravity solution offers a catalog of backups that are searchable. Is this true?
A: This is a pretty good analysis. The data protection DiscoveryPoints are stored on a fault-isolated set of disks away from the primary storage, the virtual machines (VMs) are deduped, and everything is compressed. However, they are not objects; they are still the same files as on the primary side, so there’s no object-level extraction required to get to them. Combined with the indexing for search and discovery and the catalog, the Discovery Series lets you see what has changed and when, search for it, and recover it. Traditional snapshots do a good job of tracking that data has changed, but they don’t actually tell you what changed. Being able not only to track the changes but also to see and search them gives DiscoveryPoints the efficiency of snapshots with the usability of a backup.
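If it helps to picture the difference, here is a quick, purely illustrative Python sketch (not DataGravity code; every name in it is made up) contrasting what a block-level snapshot records with what a file-level, searchable catalog can answer:

```python
# Illustrative sketch only: block-level snapshot deltas versus a
# searchable, file-level catalog. All names here are hypothetical.

# A traditional snapshot effectively tracks which blocks changed,
# with no file or user context attached to them.
snapshot_delta = {"changed_blocks": [1024, 1025, 4096]}

# A catalog of the kind described above keeps file-level entries,
# so you can search for what changed, when, and by whom.
catalog = [
    {"path": "/finance/q3-forecast.xlsx", "changed": "2014-10-01T09:00", "user": "wurban"},
    {"path": "/hr/offer-letter.docx", "changed": "2014-10-01T11:30", "user": "jsmith"},
]

def search(entries, term):
    """Return catalog entries whose path contains the search term."""
    return [e for e in entries if term in e["path"]]

# The snapshot delta can restore data, but only the catalog can answer
# "which forecast files changed, and when?"
print(search(catalog, "forecast"))
```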

Q: In regard to the user-searchable snapshots and indexing, does this happen via an app or the user’s Internet browser? It looks like an HTML5 tab. Is this similar to a search appliance?
A: That’s an apt analogy. The end-user capabilities are delivered within the same HTML5 Web interface. In the demo, I use incognito mode to make sure the admin and user cookies don’t cross over, but users can do this from their phones, tablets or desktops. Access is all based on their Active Directory (AD) credentials, and since the data is indexed during the actual high-availability (HA) stream, there is no nightly crawl necessary to look for changes and index them. That means there are no performance hits or time delays when finding information.
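For those who like to see a concept in code, here is a minimal sketch, under my own assumptions, of the design trade-off at work: updating the index inline as data is written, rather than crawling the file system on a schedule. The structures below are illustrative, not our implementation:

```python
# Hypothetical sketch: inline indexing at write time keeps the search
# index always current, so no nightly crawl is needed.

index = {}  # term -> set of file paths containing that term

def tokenize(text):
    """A deliberately naive tokenizer, for illustration only."""
    return {t for t in text.lower().split() if t}

def write_file(path, content):
    """Update the search index as part of the write path itself."""
    for term in tokenize(content):
        index.setdefault(term, set()).add(path)
    # ... the actual write to disk would happen here ...

write_file("/projects/roadmap.txt", "storage roadmap draft")

# The file is findable the moment it lands, with no scheduled scan.
print(index.get("roadmap"))  # -> {'/projects/roadmap.txt'}
```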

Q: The audit, compliance and PII capability looks like it offers much of the capability of some other purpose-built software suites. Is this accurate? Does DataGravity Discovery Series also include quota management?
A: There are many full-featured suites available that provide capabilities above and beyond our current offering. That being said, using one of these top-shelf audit, compliance and PII tools may also be out of reach for many IT shops, as they can be pricey solutions that require a separate SLA and a separate admin (or skill set) to fully manage and optimize. The DataGravity audit, compliance and PII capability comes as part of the all-inclusive Discovery Series software. Currently, there is no quota management, but users get reports on storage usage so they can identify their storage outliers. Being part of the I/O stream also gives us a lot of insight that applications that crawl the data after the fact cannot get. Of course, we’ll be adding more features as time goes on.
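As a conceptual illustration only (the pattern and function below are my own assumptions, not our detection engine), here is roughly what flagging PII in the I/O stream might look like:

```python
# Illustrative sketch: tagging PII as data passes through the I/O stream,
# instead of crawling files after the fact. Not DataGravity's actual code.
import re

# A simple U.S. Social Security number pattern. Real PII detection uses
# many more patterns plus validation to reduce false positives.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tag_pii(path, content):
    """Return a PII tag for a file as it is written."""
    tags = ["ssn"] if SSN.search(content) else []
    return {"path": path, "pii": tags}

print(tag_pii("/hr/new-hire.txt", "SSN: 123-45-6789"))
# -> {'path': '/hr/new-hire.txt', 'pii': ['ssn']}
```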

Q: How long can I set my retention policy for with Discovery Series? If my storage needs grow beyond the capacity of a single device, what’s next? In particular, if our backup retention grows, can we expand just this pool of storage? 
A: With DataGravity, the retention policy can be pretty much anything you want it to be. You can set a schedule that runs every hour and keeps 24, and another that runs every week and keeps 52, and so on. The policy is bounded only by the space available in your storage. In the first version of Discovery Series, we have 48TB and 96TB (raw) offerings. As for the primary and intelligence pools, they are dynamic, so they will automatically grow with you as you need the space.
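To see how those schedules compose, here is a small, hypothetical Python sketch (the field names are made up for illustration) that works out how far back each retention rule reaches:

```python
# Hypothetical sketch of the retention math described above:
# "every hour keep 24" covers a day; "every week keep 52" covers a year.
from datetime import datetime, timedelta

policies = [
    {"every": timedelta(hours=1), "keep": 24},  # hourly, keep a day's worth
    {"every": timedelta(weeks=1), "keep": 52},  # weekly, keep a year's worth
]

def oldest_retained(rules, now=None):
    """Return the oldest point in time each schedule still covers."""
    now = now or datetime.now()
    return [now - r["every"] * r["keep"] for r in rules]

for rule, horizon in zip(policies, oldest_retained(policies)):
    print(f"keep {rule['keep']} every {rule['every']}: back to {horizon:%Y-%m-%d %H:%M}")
```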

Do you have other questions for us? Register for the next DataGravity Discovery Series demo here.


Will Urban

With more than 13 years in enterprise servers, storage and virtualization, Will loves to share his enthusiasm about cool new technology with customers and partners. Will has been a speaker at various trade shows and user conferences and is the technical marketing manager for DataGravity. He is a husband and proud father, and in his spare time he is an avid football fan and gaming enthusiast.