Hadoop in Big Data.
Apache Hadoop is becoming the de facto infrastructure for distributing data across clusters and then analyzing it with MapReduce, whether to optimize web pages, personalize content, or increase the effectiveness of online advertising.
There’s just one problem.
Hadoop was not designed as a general-purpose storage environment. Metadata is kept on a single server (the NameNode), and data is replicated three times to guard against loss. That gets expensive when the data store reaches petabytes in size. If the metadata server fails, the replicated data can become entirely inaccessible. Further, maintaining three copies of every block adds significant overhead and management cost.
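The cost argument is simple arithmetic. As a rough sketch (the dispersal ratio below is an illustrative assumption, not a vendor figure): triple replication stores three raw bytes per logical byte, while a hypothetical 10-of-16 dispersal code stores 16/10 = 1.6.

```python
# Illustrative storage-overhead arithmetic. The 10-of-16 scheme is an
# assumed example code, not Cleversafe's actual configuration.

logical_pb = 1.0  # petabytes of user data

replication_raw = logical_pb * 3        # HDFS default: 3 full copies
dispersal_raw = logical_pb * 16 / 10    # hypothetical 10-of-16 code

print(f"3x replication:     {replication_raw:.1f} PB raw")
print(f"10-of-16 dispersal: {dispersal_raw:.1f} PB raw")
```

At petabyte scale, the gap between 3.0x and 1.6x raw storage is the difference the article is driving at.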
Cleversafe believes it has the answer: combining its object-storage dispersal technology with Hadoop's MapReduce.
Cleversafe uses a technique called erasure coding. It takes data and slices it into small pieces. The slices are distributed to separate…