You Got Big Data! Where’s the BIG ROI?

The Big Data ROI Conundrum
Let’s face it: Big Data is neither easy nor cheap. It has proven to be one of the most difficult conundrums for experts, not only on the technical side but also in justifying its ROI. But it doesn’t have to be this way. Most companies justify it as an investment in the future, something that has to be done to stay competitive whether or not its value is proven in the near term. In other words, it is the new buzzword tech trend, just as concepts such as “BI”, “mobile”, and “cloud” become commodity items.
Let’s take a moment and analyze the process of implementing a typical Big Data infrastructure.

We can divide Big Data implementations into three major phases:
1. Storage, replication, and retrieval.
2. Processing data (structured and unstructured).
3. Intelligence extraction.


Storage, Replication & Retrieval
Most organizations learn the hard way how expensive and time consuming it is just to finish the storage layer of the architecture. This layer comprises the hardware and core software that provide the data warehouse capabilities: storage, replication, and retrieval of data. This is the stage where companies deploy a distributed file storage system such as Hadoop, document stores, key-value stores, or even graph NoSQL systems that provide a graphical representation of correlations in the data. Many vendors support the main open source technologies, and commercial applications exist to facilitate management of the ecosystem where the data warehouse will reside. The choices are plentiful. Unfortunately, the cost, and above all the time to implement, is high. On top of that, executives enthused about “Big-datafying the stack” are disappointed when they learn that, after all the time and energy spent creating a repository for structured and unstructured data, they still haven’t seen a quantifiable return on investment, and that customers still won’t pay extra until the value can be explained in simple, articulate terms.
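To make the storage phase concrete, here is a minimal sketch of the document-store side of this layer. It assumes a locally running MongoDB instance; the database, collection, and field names are illustrative only, not a recommendation of any particular vendor.

```python
# A minimal document-store sketch: store one piece of semi-structured
# data and fetch it back. Assumes MongoDB is running on localhost.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["warehouse"]  # illustrative database name

# Unstructured text travels alongside a little structure (metadata).
db.tickets.insert_one({"ticket_id": 1, "channel": "email",
                       "body": "The app crashes every time I log in."})
print(db.tickets.find_one({"ticket_id": 1}))
```

Simple as this looks for one record, replicating and retrieving it at scale is where the real cost of this layer lives.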


Processing Stored Data
After the first phase comes the phase commonly referred to as “Beyond Hadoop.” This is when you realize that, with all that data in place, you now need to actually process it, organize it, and query it. This is the stage where teams start writing map-reduce jobs and ETL pipelines to transform the data (often into familiar relational databases), and to organize, query, search, and make sense of it, as sketched below. Again, this is not an easy process; it demands time, investment, and scarce specialized personnel.
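To illustrate the shape of this work, here is a toy map-reduce job in plain Python. It is not Hadoop itself, just the mapper/reducer pattern such jobs implement, shown as the classic word count.

```python
# Toy map-reduce in plain Python: the mapper emits (key, value) pairs,
# the reducer aggregates them by key. Hadoop distributes these same two
# steps across a cluster.
from collections import Counter

def map_phase(record):
    # Mapper: emit a (word, 1) pair for every token in one record.
    for word in record.lower().split():
        yield word, 1

def reduce_phase(pairs):
    # Reducer: sum the counts emitted for each key.
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return totals

records = ["Big Data needs processing", "processing big data takes time"]
pairs = (pair for record in records for pair in map_phase(record))
print(reduce_phase(pairs))  # e.g. Counter({'processing': 2, 'data': 2, ...})
```

The two functions are trivial; writing, tuning, and maintaining hundreds of such jobs against real data is what consumes the budget in this phase.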


Extracting Insights
Finally, we come to the last piece of the architecture: intelligence extraction. Here is where “Text Analytics,” “Data Analytics,” and “Machine Learning” solutions come into play. After going through the time and expense of implementing the storage and processing layers, it is now time to make sense of the data, and especially to unlock value from unstructured data. That can be any type of text, from properly written news documents to user-generated social media posts, emails, notes, survey responses, chats, support tickets, and phone call transcripts.

After all the investment put into the first two layers, this layer turns out to be the iceberg under the “tip.” To unlock the value of that data, it is imperative that it be analyzed with some sort of text analytics solution. Typically, a company either outsources this to a third-party text analytics consulting vendor or tries to develop it in house; both are expensive options.

The current crop of text analytics solutions on the market are not plug and play. No matter how flashy the demos and data visualization dashboards, or how tempting the promises, the truth is that they take more time and money to implement. Why? Out of the box, those tools are not ready to produce value. They are built on commodity statistical machine learning and natural language processing algorithms. Yes, they can help: they give you basic sentiment, classification, and key named entities (NER). But they cannot unlock the intelligence in the data by themselves. In other words:

Conventional NLP-based text and sentiment analysis tools require supervised training for domain-specific data because they rely on language models tuned to particular industry domains. The time to train these systems, and to keep them maintained, becomes an ongoing tax, as the sketch below illustrates.
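To make that tax concrete, here is a minimal sketch of the kind of commodity supervised classifier these tools are built on. Nothing works until someone hand-labels domain examples, and the labeling has to be repeated whenever the domain or vocabulary drifts; scikit-learn is used here purely for illustration.

```python
# The supervised-training tax in miniature: a commodity sentiment
# classifier that is useless without hand-labeled, domain-specific data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled sample; real deployments need thousands of rows,
# re-labeled every time the domain or vocabulary shifts.
texts = ["great battery life", "screen cracked after a week",
         "fast shipping, well packaged", "support never replied to me"]
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["the battery died after a week"]))
```

Move this model from product reviews to, say, insurance claims or French-language tickets, and the labeling and retraining cycle starts over.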

This is when the text analytics companies come to the rescue, in disguise, offering their “professional services” on top of the mix of technologies you have already acquired. Even worse, what if you have to deal with data in the different languages of the different markets where your company has a presence? The contemporary answer is, again, to train those tools for each language. As you can see, this last phase never seems to end. It’s like buying a car that needs an oil change every other day.

So, how do you fix this never ending ROI conundrum?
OmnIQuo provides its A.I.-based cognitive computing technology, which, unlike the text analytics solutions out there, requires no training. The omnIQuo platform is accessible via APIs and cloud-based services, so your organization need not go through the elaborate task of setting up all the infrastructure. Yes, omnIQuo’s tools are plug and play out of the box. Data from any domain, in any format (news, web sites, surveys, blogs, email, documents, and user-generated content) can be fed to it right away, and it is processed in real time. No sour-taste surprises, no catches, and no disappointments.

That said, you can take advantage of omnIQuo’s cloud-based services even if you have already set up your Big Data infrastructure, i.e., whether you are in pre-phase-1 or post-phase-2. If you have already invested in your infrastructure, you can start justifying its value and get a quick return on your investment by harvesting the intelligence from your data using the APIs and pre-built services.
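As a sketch of how such API access typically looks, the call below is hypothetical: the endpoint URL, payload, and field names are placeholders for illustration, not omnIQuo’s published interface.

```python
# Hypothetical call shape only -- endpoint, headers, and fields are
# placeholders, not a documented omnIQuo API.
import requests

resp = requests.post(
    "https://api.omniquo.example/v1/insights",   # placeholder URL
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    json={"text": "The app crashes every time I log in.", "lang": "auto"},
)
print(resp.status_code, resp.json())
```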
OmnIQuo’s Deep Meaning Insights technology does not require you to program scripts to capture information. On top of that, omnIQuo’s adaptive-learning narrative and event extraction engine can extract insights from data even without pre-defined template frames, enabling organizations to get answers to questions they did not even know to ask.

What’s so special about the omnIQuo Language Interpretation Engine?
Communication via language serves to convey a message, an idea, an abstract concept, an action, an event, or a fact, among other things. That is why traditional statistical machine learning models are limited: they cannot model all the intricacies of communication. Most text analytics solutions on the market are built on the same DNA, provided by the OpenNLP and Stanford NLP projects. These projects were trained on standard language data, and hence they fail to evolve as data grows big. OmnIQuo’s proprietary, differentiated approach is to use the intelligence already embedded in language instead of relying on such models. Language is merely a protocol, and human languages follow similar protocols. That is why the specific language (English, French, German, etc.) becomes just one layer of the protocol. Once you can extract the intelligence from the protocol, the ‘language’ that transmitted it does not matter.
