Hadoop’s new release could have transformative impact

Posted by Justin Hesser on January 21, 2014

When it comes to Big Data, the only real constant is change. The field is always evolving in new and interesting ways as businesses try to figure out how best to leverage the massive amounts of information available to them. Indeed, one of the primary concerns of FileMaker consultants is staying abreast of new best practices and how to deploy them. 

The release of Hadoop 2.0, the second major version of the powerful open-source analytics platform, is the sort of update that could very well alter the way businesses approach Big Data. The 2.0 version, which became generally available in October 2013, supplements cloud-based information storage with the sort of on-premise collection that could lead to further discovery in the future. Crucially, this version also frees the software from its dependence on batch processing, allowing it to run workloads in something close to real time. 
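Hadoop's own APIs are beyond the scope of this post, but the batch-versus-real-time distinction mentioned above can be illustrated in plain terms. The following is a minimal Python sketch (not Hadoop code) contrasting a batch-style computation, whose result exists only after the entire dataset has been consumed, with an incremental one that yields an up-to-date answer after every record, the style of processing Hadoop 2.0 makes practical:

```python
# Conceptual illustration only -- not the Hadoop API.
# Contrast batch processing (one result, after the full pass) with
# incremental processing (a usable snapshot after every record).

from collections import Counter


def batch_word_count(records):
    """Batch style: consume the whole dataset, then emit one result."""
    counts = Counter()
    for record in records:
        counts.update(record.split())
    return counts  # only available after the full pass


def incremental_word_count(records):
    """Streaming style: yield an updated result after each record."""
    counts = Counter()
    for record in records:
        counts.update(record.split())
        yield dict(counts)  # a usable snapshot at every step


log = ["error disk full", "error network down", "ok"]

final = batch_word_count(log)
snapshots = list(incremental_word_count(log))
```

In the batch version, a consumer must wait for the entire log before seeing any counts; in the incremental version, each snapshot is actionable the moment its record arrives, which is the practical meaning of "near real-time" analytics.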

One of the reasons that Hadoop 2.0 is so potentially exciting is that it dovetails neatly with the logic behind the Big Data stack. Developers will be able to build outward from the core technology to create even more powerful software. Merv Adrian, an analyst with Gartner, Inc., described the potential of this process. 

"As people gain experience, we expect them to build larger projects," Adrian said during a recent webinar. 

This newest Hadoop iteration isn't without its flaws. The security protocols aren't yet perfect, especially because data is often being pulled in from public sources. This potential for risk is part of why privacy is already a big concern this year, and it looks to remain a high priority for the project for the foreseeable future.