Wednesday, May 6, 2015

Hadoop Meetup on the sidelines of Strata Hadoop Conference - Part 2

Read part 1 of this here

Day 2 of the meetup was equally exciting, if not more so.  Lined up were talks from Qubit and Google, William Hill (a surprise for me, more on that later) and then PostCodeAnywhere, all very promising from the synopses.

Google and Qubit essentially showcased a stream processing engine with pluggable components, many of which can be written in different technologies and programming languages.

Of course, Google Cloud Dataflow is much more than just a stream processing engine; however, from a real-time data ingestion perspective, that capability is pretty significant.

A completely managed system, it works on the publish-subscribe (pub-sub) model.  As Reza put it, “pub-sub is not just a data delivery mechanism, it's used as glue to hold the complete system together”.  Pluggable components are another differentiator for Google’s offering; in today’s demo they showcased Bigtable as one of the consumers at the end.
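To make that "glue" idea concrete, here's a minimal, hypothetical sketch of the pub-sub pattern itself (not Google's actual Dataflow or Pub/Sub API): a topic fans messages out to independently pluggable consumers, one of which could just as well be a writer to a wide-column store like Bigtable.

```python
# A toy pub-sub topic: each subscriber gets its own queue and its own
# consumer thread, so consumers are fully decoupled and "pluggable".
import queue
import threading
import time

class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        # One queue per subscriber; a daemon thread drains it.
        q = queue.Queue()
        self.subscribers.append(q)
        threading.Thread(target=self._drain, args=(q, handler), daemon=True).start()

    def publish(self, message):
        # Fan the message out to every subscriber's queue.
        for q in self.subscribers:
            q.put(message)

    @staticmethod
    def _drain(q, handler):
        while True:
            handler(q.get())

events = Topic()
# Two pluggable consumers: one aggregates metrics, one stands in for a
# Bigtable-style store writer (both are illustrative placeholders).
events.subscribe(lambda m: print("metrics consumer:", m))
events.subscribe(lambda m: print("store consumer:", m))

events.publish({"user": "u1", "action": "click"})
time.sleep(0.2)  # give the daemon threads a moment before the script exits
```

Adding a third consumer later is just another `subscribe` call; the publisher never changes, which is exactly what makes the components swappable.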

From my own knowledge of stream processing, which is not significant in any way, I could see many similarities with IBM’s InfoSphere Streams and some with Apache Kafka.  However, a question about how Dataflow compares with these systems went unanswered by Google (though in very good spirit; in a chat with the speaker Reza later on, it came across as more of a philosophical avoidance of the question than anything else).

The William Hill talk (by Peter Morgan, their Head of Engineering) was a genuine surprise, at least for me.  Perhaps out of ignorance, I hadn't realized that their systems are far more sophisticated and heavily loaded than I would have imagined.  As an example, they process 160 TB of data through their systems on a daily basis.

Among their main components are the betting engine and the settlement engine, each managing considerable complexity within the overall system.

William Hill supports an open API as well, enabling app developers to pick up data elements and innovate. However, for obvious reasons, only limited data is opened up to the public domain.  Would that be a deterrent for app developers, not having enough data?  For example, if I wanted an app to report who’s betting on a certain game, cross-referenced with geolocation data, I couldn't do that, since William Hill doesn't publish demographic data.  I personally feel alright with it; many of those data elements could be used in ways that influence the betting system itself, which would be counter-productive.

I would imagine their IT systems are among the most capable around, to be able to manage such data volumes with such speed and accuracy. Commendable job.  I will probably write a separate post on their architecture once I get my hands on the presentation slides (in a couple of days, maybe).

The talk from PostCodeAnywhere was the most educational for me personally.  I got to understand a bit about Markov models and chains, and how they can be used for machine learning.  Very interesting stuff there too.
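For anyone else new to the idea, here's a toy sketch of a first-order Markov chain over words. The corpus and everything else here is made up for illustration; PostCodeAnywhere's actual models are surely far more involved. The core idea is just that the next state depends only on the current one.

```python
# Learn word-to-word transitions from a corpus, then sample a sequence.
import random
from collections import defaultdict

def train(words):
    # Map each word to the list of words observed immediately after it.
    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=8):
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        # Duplicates in the list make this sampling proportional to counts.
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat the cat ran".split()
model = train(corpus)
print(generate(model, "the"))
```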

Apache Spark is increasingly being seen as the tool for performing analytics on the fly, especially on large volumes of data.  It will be very interesting to see how the analytical capabilities of R and Python compare with what Spark offers.
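As a flavour of what that on-the-fly analytics looks like, here's a minimal PySpark sketch, assuming a local Spark installation; the HDFS path is hypothetical. It's the classic aggregate-over-a-large-file workload, distributed across the cluster by Spark itself.

```python
# Minimal PySpark job: count word occurrences across a large text file
# and print the ten most frequent.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")

counts = (sc.textFile("hdfs:///data/events.txt")   # hypothetical input path
            .flatMap(lambda line: line.split())     # one record per word
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))       # sum counts per word

for word, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, n)

sc.stop()
```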

Speaking to another attendee today, it came out that people increasingly prefer R for data massaging and cleansing; however, it isn't seen as fit for the heavy lifting required for the real analytical and/or predictive pieces. For those areas, people still prefer Python.
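For what it's worth, Python covers both halves of that split. Here's a minimal sketch of a cleanse-then-predict workflow using pandas and scikit-learn; the CSV path and column names are hypothetical.

```python
# Cleansing with pandas, then a simple predictive model with scikit-learn.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Massaging/cleansing: load, drop incomplete rows, fix types.
df = pd.read_csv("sessions.csv")                       # hypothetical input
df = df.dropna(subset=["duration", "pages", "revenue"])
df["duration"] = df["duration"].astype(float)

# Predictive piece: fit a regression on the cleaned frame.
X = df[["duration", "pages"]]
y = df["revenue"]
model = LinearRegression().fit(X, y)
print("R^2 on training data:", model.score(X, y))
```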


IBM’s BigR is a possible contender for the job: they talk about having optimised R for a Hadoop cluster and enabled it to work on top of HDFS.  However, BigR is not open source, and that could be its biggest challenge to adoption.
