Omniquo Announces Strategic Partnership with Frost & Sullivan

 

Frost & Sullivan to be go-to-market partner for Omniquo's artificial intelligence software suite

August 2, 2017 – Omniquo is proud to announce that it has entered into a strategic partnership agreement with Frost & Sullivan. As part of this agreement, Frost & Sullivan will offer Omniquo's artificial intelligence software suite for sales, marketing, strategic insights and automation through its channels worldwide. Combined with Frost & Sullivan's GIL leadership, the suite offers a differentiated solution set to customers worldwide.

This partnership enables Frost & Sullivan to bring next-generation AI technologies to its customers as well as add a new revenue stream to its business.

“We are delighted that Frost & Sullivan chose to partner with Omniquo. This agreement affirms Omniquo's leadership as a premium provider of enterprise intelligence and automation software applications to growth-oriented businesses,” said Sunny Gosain, Chairman & CEO of Omniquo.

“Omniquo's applications, coupled with Frost & Sullivan's unique approach to strategic insights, offer a world-class business operating system to C-level executives worldwide. This partnership accelerates the benefits of digital transformation, competitive demand generation and strategic insights for our global client base,” said David Frigstad, Chairman of Frost & Sullivan.

About Omniquo
Omniquo provides an artificial intelligence software suite that powers the functions of sales, marketing, service and transactional workflow. Omniquo's differentiated IP in deep-domain conversational A.I. understands meaning in content such as emails, chat, SMS, documents, the web and social media, and powers the company's Intelligent Agent and dark-data insights product lines. Omniquo's customers range from the Fortune 500 to startups that use the company's 'intelligent agents' to drive intent-based automation.

 

About Frost & Sullivan

Frost & Sullivan, the Growth Partnership Company, works in collaboration with clients to leverage visionary innovation that addresses the global challenges and related growth opportunities that will make or break today’s market participants. For more than 50 years, we have been developing growth strategies for the global 1000, emerging businesses, the public sector and the investment community.

 

Omniquo Contact: Anand Nukala

Title: VP of Business Development & Product Marketing

Email: anukala@omniquo.com

Phone: 425-633-0592

Omniquo in the News

Discovering the Value “Inside the Conversation”

“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”

– Albert Einstein

In my previous post I discussed the fact that in the worlds of marketing and sales, structured data is dwarfed by the vast amount of unstructured data that exists. Research shows that at least 80% of all data that exists today is unstructured. (Help me determine the real number as it relates to sales. Take the 5-minute survey here).

Structured data is represented in fields and values in your marketing automation, CRM and other sales and marketing systems. Since this data exists in a structure, it can be reported, analyzed and clearly presented by numerous tools.

Unstructured data is contained in emails, conversations, documents, chats, text messages and social interactions. It is the data “inside the conversation”. Today, some of this unstructured data may be captured, but most isn’t. Even the data that is captured is not analyzed or acted upon because analyzing it requires human interpretation; someone has to read it; someone has to listen to it. We just don’t have the time to do that; therefore we discard the value associated with 80% of the data that exists! What a waste.
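To make the distinction concrete, here is a minimal sketch contrasting a structured CRM record with the unstructured text that surrounds it. The record fields and email text are invented for illustration.

```python
# Illustrative only: the same deal seen as structured fields vs. the
# unstructured text "inside the conversation". All data here is invented.
crm_record = {                      # structured: fields and values
    "opportunity": "Acme renewal",
    "stage": "Proposal",
    "value": 120_000,
    "close_date": "2016-03-31",
}

email_body = """Hi Sam, thanks for the proposal. Honestly, the pricing feels
high compared to what we saw elsewhere, and legal is nervous about the
data-retention clause. Can we talk Friday?"""   # unstructured

# The record captures *what* (stage, value, date); the pricing objection and
# the legal concern live only in the unstructured text.
print(crm_record["stage"], "/", len(email_body.split()), "words of context")
```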

Solutions are beginning to emerge that analyze these massive amounts of rich unstructured data. The best known is IBM Watson, which uses natural language processing (NLP) and machine learning to reveal insights from large amounts of unstructured data.[1]
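As a rough illustration of the kind of NLP these systems apply, the sketch below uses NLTK's off-the-shelf VADER sentiment analyzer, a far simpler tool than Watson, to score raw text; the sample sentences are invented.

```python
# A minimal sketch of NLP applied to unstructured text, using NLTK's
# off-the-shelf VADER sentiment analyzer. Sample sentences are invented.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

for line in [
    "The demo was fantastic and the team loved it.",
    "We're frustrated; the integration keeps failing.",
]:
    print(f"{sia.polarity_scores(line)['compound']:+.2f}  {line}")
```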

The difficulty with systems like Watson is the complexity of implementation. These NLP systems typically require vast amounts of training before they can understand and interpret unstructured data. The training must cover the context of the industry, the market, the vocabulary (dictionary) and the specific problem being solved: for example, teaching the system the symptoms and indicators of certain diseases so it can assist a doctor in diagnosis. Training an NLP system often takes person-years of effort from expensive data and linguistic scientists, even when aided by machine learning, and these projects carry huge infrastructure costs because their dictionary models are difficult to scale. As a result, NLP solutions have been out of reach for small and medium businesses due to sheer cost, questionable accuracy and the complexity of implementation and maintenance.

However, that paradigm is changing rapidly. New advances in the domain of deep meaning analysis promise a world where NLP systems require minimal to no training. This means effortless, rapid and inexpensive implementation. Finally, the hidden value inside the conversation will be available to everyone.

This change is coming soon, so now is the time to prepare. In my next post I'll cover a few steps that you can take now to be ready for the not-too-distant future!

[1] http://www.ibm.com/smarterplanet/us/en/ibmwatson/what-is-watson.html

What Do You Really Know?

One of my favorite movie quotes of all time is from Men in Black, when K is speaking to James Edwards shortly after Edwards's encounter with Jeebs.

“Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you’ll know tomorrow.”

What do we really know about what is happening in our marketing and sales process? Do we have access to the real meaning of the interactions and conversations taking place between our company and our prospects and customers? The key to taking meaningful action is understanding not only what happened, but why.

In my previous post I introduced the concept of structured vs. unstructured data. Here's a quick review of the definitions of each:

  • Structured data exists in fields and field values inside your marketing and sales systems; what we think of as a traditional database.
  • Unstructured data is contained in emails, conversations, documents, chats, text messages, social interactions and even pictures and videos.

As marketing and sales professionals we have become very sophisticated in our ability to analyze structured data. New SaaS applications have arisen providing solutions to some of the most difficult problems such as marketing attribution analysis and “movement and change”[1] in the sales process. Yet all this analysis of our structured data is still hamstrung by three major weaknesses:

  1. It is by its very nature flawed because much of the structured data relies on human interpretation and recording of the “facts”, and we are all familiar with the constant struggle of data input and accuracy.
  2. Structured data represents at best 20% of all the data[2] associated with our interactions with our customers and prospects.
  3. Structured data tells us only the “what”… what happened (results) or what we think will happen (forecasts). It does absolutely nothing to help us understand “why”. Why did that deal not close? Why is our churn rate increasing? Why are our conversion rates what they are?

The “why” is only understood when we review the actual conversations between our company and our prospects and customers (think of what happens in a deal review meeting).  If you are a manager, imagine what you would know if you could listen to every conversation, read every email, review every profile, read every document and article related to the prospect’s company, see every tweet. If you are a rep, imagine what you could know if you could do the same for every one of your colleagues’ interactions with their prospects and customers.

The possibility of doing this may seem like science fiction. I propose it’s not, and that we’ll begin to gain access to this rich knowledge in 2016! Stay tuned for more.

[1] See Shape and Velocity of the sales funnel

[2] IBM claim

Shape and Velocity of the Sales Funnel

This article was written by Stu Schmidt in 2003. These were the heady days of Salesforce.com expanding beyond a single floor at One Market; of Upshot being acquired by Siebel; of the emergence of the science of sales process.

 

The Art of Measuring Sales

by Stu Schmidt

Quotas have gone up again this year. Resources have been cut yet again – the company needs us to do more with less. The competition is fierce, and our buyers are only spending on solutions they must have now. The plans for equipping the sales force have been made, and implementation of training, the compensation plan and methodology are all underway. We've committed the time, money and resources to improving our sales productivity, but how do we know that the plans we've implemented are creating the result we want? If we wait until the results are in, it will be too late. And even when we do see the results, how will we know whether they stemmed from what we did or from some other external factor?

In sales, this scenario is all too common. When we make investments in programs that should impact sales productivity and effectiveness, we often do it based on gut feel and experience, hoping that at the end of the day, we’ve done the right thing. We don’t have visibility to the effect our efforts are having until the end of the game. Unlike almost every other function in the enterprise, sales does not have a closed-loop management system that allows us to measure, manage and optimize our sales process – while it’s happening.

This is the first in a series of articles dedicated to helping you create an effective way to measure, manage, and optimize your sales process. My goal is to help you increase your visibility to the actions you can take to immediately impact performance and dramatically improve your forward visibility.

As a sales manager you know the importance of measuring the performance of your sales force. The questions are: What are you measuring? Are you measuring the right things? Are you measuring them at the right frequency? Or is all this measurement stuff just a colossal waste of time?

What, more measurements?

On the surface, it feels like the last thing we need is more measurements. We're already either measuring, or being measured on, a myriad of data points. It seems that every time the CEO or CFO walks into our office, we end up with more things to measure. Do any of these sound familiar?

  • Results against quota
  • Discount percentages
  • Average selling price
  • Cost of sales
  • Booking to revenue conversion ratio
  • Booking linearity
  • Predictability
  • Forecast accuracy
  • Gross margin
  • Conversion rates by forecast category
  • Expenses
  • Accounts receivable

In addition, we need to produce all these by territory, by product family, and by revenue type. In fact, we could (and I’m sure you know some sales managers who have) make a career out of crunching the numbers. Where are we going to get the time to actually do something that will impact the numbers, not just report them? After all, that’s really what it’s all about, isn’t it… positively impacting the numbers?

Why, after spending so much time measuring, is it not obvious what action we can take to impact the numbers? Have the measurements told us anything that we didn’t already know? Have the measurements offered some insight or visibility to why the results are what they are?

Here’s a sobering thought – might we be measuring the wrong things? All of the measurements listed above share one common characteristic. Go back and take another look at the list and see if you can pick it out. No, it’s not that they were all invented by sadistic accounting trolls. They all measure something about the sale after the sale has already taken place.

Imagine for a second that you’re the VP of manufacturing, not sales. You get a call from Bob in QA at the end of the line, saying that there’s something really wrong with the finish on the hoojedees. Rushing down to the floor, you discover that the paint is streaked and uneven: clearly, not a quality job. The problem obviously is something to do with the paint. You order the line shut down, the paint system purged, cleaned, and refilled with new paint. You hang around as the line starts up. To your horror and dismay, the paint on the hoojedees is still streaked and uneven. Now the challenge is not only finding what is causing the problem, but explaining the major cost and waste to the CEO.

By the way, so as not to leave you hanging, the problem wasn’t the paint at all, it was a contaminant in an earlier stage of your manufacturing line. The reason you didn’t know this was that you were only checking the results of the process. You didn’t understand what was happening during the process.

Transferring this analogy over to our world of sales, we see some disturbing similarities. All, or at least the majority, of our measurements are trailing indicators – they measure an aspect of the sales process after the fact. In addition, we often don’t know what is really happening inside the sales process – what are the major activities that should take place at every stage of the process, and how do we know that each one was completed successfully?

Measuring only result-based indicators, and not knowing what is really happening inside the sales process will generate the same outcome for us as for our poor manufacturing VP: continued flat results with no idea of how to change them.

What should we be measuring?

We know that we still need to track results, but now we also know that simply tracking results will not give us the insight to the action we need to take. So, what should we be measuring?

The answer is to get inside the sales process, and measure some key indicators prior to the completion of the sale. In order to get inside the sales process, we need to know what the sales process is. This is the starting point and the foundation for any approach that hopes to measure, manage, and optimize sales. Defining your sales process is beyond the scope of this article, but at a minimum you need to:

  1. Know the major steps involved in the sales cycle, from prospecting through to acquiring and maintaining a satisfied customer
  2. Establish a common language and culture around the process so that everyone has the same understanding of what we mean when we say “we’re in the proposal step”
  3. Define the activities that need to typically occur at each step to effectively progress the opportunity to the next step
  4. Implement a Sales Force Automation or CRM system that will allow you to track opportunities by step in the sales process

Funnel

Your sales process is unique to you – your market, your strategy, your customers, and your unique environment. However, every sales process has some key steps that are always present in one form or another. For illustration purposes throughout this article, let’s imagine we have the following sales process:

  1. Lead generation
  2. Initial qualification and research
  3. First contact and additional qualification
  4. Detailed discovery
  5. Solution definition
  6. Proposal
  7. Negotiate and close

Assuming the process is in place, we can now start to truly measure it. In a hope to avoid the religious wars about the difference between the funnel and the pipeline, and “above the funnel” opportunities, let’s just call the whole thing the “funnel.” It includes every opportunity from initial lead generation through to close.

The most common measurement that organizations start with is “how much do I have in each step of the funnel?” This initial process measurement is one small step for a sales manager but one giant leap for the organization. It is far more powerful than the simplistic “pipeline multiple” that is in common use. For example, in some industries, a pipeline multiple of 4X is considered good. In other words, if my quota is $1M for the quarter, I know I need $4M in my funnel. The shortcoming of this old approach is obvious – what happens if the $4M in the funnel is all at step 3 or earlier? Good luck making the quarter! Prepare 3 envelopes.
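As a sketch of what measuring the funnel by step looks like in practice, compare the flat pipeline multiple with a by-step breakdown; the opportunities, steps and dollar amounts below are invented.

```python
# Sketch: funnel by step vs. a flat pipeline multiple. Opportunities,
# steps and dollar amounts are invented for illustration.
from collections import defaultdict

quota = 1_000_000
opportunities = [            # (customer, current step, value)
    ("Acme", 3, 500_000),
    ("Globex", 2, 1_500_000),
    ("Initech", 3, 2_000_000),
    ("Umbrella", 6, 250_000),
]

by_step = defaultdict(int)
for _, step, value in opportunities:
    by_step[step] += value

total = sum(by_step.values())
print(f"4X multiple satisfied: {total >= 4 * quota}")   # True: looks fine
print(f"At step 6 or later: ${by_step[6]:,}")           # $250,000 -- uh oh
```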

The second most common measurement is sales cycle length. It’s critically important to know that it’s taking you eight months on average to move an opportunity from raw lead to closed deal.

Combining the two measurements we begin to get a glimpse of the true art of measuring sales.

However, we are still missing one key element – the most important element in sales – movement. It’s one thing to know what is in your funnel at a particular step; it’s far more powerful to know what’s moving through your sales funnel. If a sales opportunity is stagnant, if it’s hung up at step 3 of the process, it is contributing to your step 3 numbers, but will it ever contribute to your results?

Consider football (apologies to our Raider fan readers). It’s good to know that we have the ball on the opponent’s 20-yard line. It’s far better to know that we got there through a series of systematic plays and first downs – that our execution has moved the ball. Conversely, if it’s 4th and 18, we may want to try something a little different next time we’re on the 2-yard line.

Borrowing from football, let's call this movement of sales opportunities from one step to another 'conversions'.

Shape and Velocity

Like all breakthroughs, what I propose is elegantly simple in concept. The true art of measuring sales process emerges when we measure all these aspects of the sales cycle by step for each and every opportunity. We call this the shape and velocity of the sales cycle. Understanding this shape and velocity is the key to successful sales management. It unlocks the gates to forward visibility and proactive activity management.

You are probably thinking by now, “Why bother with all this detail? This seems like a lot of extra effort.” Let me deal with each of these objections separately.

Let's say you already buy the fact that you have to measure conversion ratios and sales cycle length at the macro level. You say, “Hey, Stu, I already know my sales cycle is eight months long and it takes me 100 leads to close one order. Why should I worry about this shape and velocity stuff?” Take a look at the following two sales funnels:

[Figure: two sales funnels with the same eight-month cycle and 100-to-1 conversion ratio – one narrowing quickly through early qualification, the other carrying unqualified leads deep into the process]

In both cases we have an 8-month sales cycle and a lead to close conversion ratio of 100-to-1. Which one would you rather have?

The sales funnel on the left is characterized by quick and effective qualification and focus on the deals that are most likely to close. The funnel on the right represents unqualified leads being dragged through expensive steps of the process in the hope we can “turn them around.” The sales cycle on the right is costing your organization about 4 times as much as the one on the left. I would also contend that the one on the right is actually going to result in a capacity issue. Your sales force will actually only be able to handle 25 leads, not 100 in a given time period.

“OK, fine, I buy the shape thing, but what about velocity by step? Surely that’s overkill.”

Let's go back to our initial example of measuring the funnel size by step. Because we've been measuring conversions, we've determined that it is going to take $4M of step 4 funnel to make our number. However, what happens if step 4 is taking twice as long to complete as we expect it should? We now need $8M in step 4 deals to have a hope of making the number! You get the idea.
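Here is a back-of-the-envelope sketch of that velocity adjustment; the quota, conversion ratio and dwell times are illustrative assumptions, not prescriptions.

```python
# Back-of-the-envelope velocity adjustment (all numbers illustrative).
quota = 1_000_000              # quarterly quota, $
step4_conversion = 0.25        # fraction of step-4 funnel that ultimately closes
expected_dwell_days = 30       # our standard cycle time for step 4
actual_dwell_days = 60         # measured: step 4 is taking twice as long

required = quota / step4_conversion                    # $4M of step-4 funnel
required *= actual_dwell_days / expected_dwell_days    # slippage penalty
print(f"Step-4 funnel required: ${required:,.0f}")     # $8,000,000
```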

What’s it going to take?

I know, I haven't dealt with the “extra effort” question yet. “How am I going to get my sales force to enter all the data I'm going to need to measure this way, when I can't get them to use the SFA system properly today?” Glad you asked.

The real elegance of all this is that you actually need less data than you’re probably asking your sales force to enter today. Many organizations that I’ve worked with have SFA and CRM implementations that require the sales rep to enter over 50 fields for every opportunity… and keep them all up to date. It’s no wonder you get questions like “do you want me to sell, or just fill in forms?” Demanding an unreasonable amount of information from sales reps, especially when they can see no action being taken as a result, is one of the primary reasons for lack of use of these systems. By focusing on capturing only information that can lead to a decision or action, we can dramatically reduce the amount of administrative time required by the field, and increase compliance to the requested use of the system.

In order to measure shape and velocity of the sales funnel we only require:

  1. A sales process, as briefly described earlier.

  2. Some initial standards or metrics for our expected cycle times and conversion ratios. (These can be changed in the future as we “close the loop” in the management system. We just need a starting target to measure against.)

  3. An SFA or CRM system that will allow us to capture only six (that's right – six) fields of data, listed below; a minimal sketch of this record follows the list. (As your organization matures, you can add more granularity by adding more fields to your shape and velocity measurement. By adding lead source, you can track the effectiveness of lead campaigns over time. By adding product type, you can predict your sales by product, or detect trends for certain products in certain territories. And so on…)

    1. Sales professional’s name (should be automatic with any system)

    2. Customer name

    3. Opportunity name

    4. Expected close date

    5. Total value

    6. Current sales process step

  4. An analytic system, like eSP, that will provide you with simple, on-demand access to the shape and velocity indicators. The key is the ability to measure time. Without time, all we know is that we're on the 20-yard line. When we add the element of time with an analytic system, we can see movement (the key to success) and gain visibility into trends that point to required action.
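Below is a minimal sketch of that six-field record, and of how an analytic layer can derive velocity from timestamped step changes; the record structure and data are invented for illustration.

```python
# Minimal sketch: the six-field opportunity record, plus the timestamped
# step changes an analytic layer records. Structure and data are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class Opportunity:
    rep: str            # 1. sales professional's name
    customer: str       # 2. customer name
    name: str           # 3. opportunity name
    close_date: date    # 4. expected close date
    value: float        # 5. total value
    step: int           # 6. current sales process step

# Step-change history: (opportunity name, new step, date of change)
history = [
    ("Acme renewal", 3, date(2003, 1, 6)),
    ("Acme renewal", 4, date(2003, 2, 17)),
    ("Acme renewal", 5, date(2003, 3, 3)),
]

# Velocity = time spent in each step, derived from consecutive changes.
for (name, step, d1), (_, _, d2) in zip(history, history[1:]):
    print(f"{name}: {(d2 - d1).days} days in step {step}")
```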

So what?

Just like any other measurement, the only reason we do it at all is so that we take some action as a result of the information. Measuring the shape and velocity of the sales funnel will allow us to take some very specific actions that will result in improved sales performance. Understanding the shape and velocity of the sales cycle will give us unprecedented forward visibility. Managing the shape and velocity of the sales funnel will allow us to optimize the entire sales process for continuous improvement in sales results.

In the next newsletter, we’ll take a look at some of the practical management situations, and resulting actions that could be taken, related to measuring the shape and velocity of the sales funnel.

The Futile Quest for CRM Data Accuracy

I was wrong…very wrong.

For years I preached a story of simplicity of data in the CRM system. I was convinced that with only six pieces of data you could manage the world. Simply tell me:

  1. The customer name
  2. The opportunity name
  3. The sales rep
  4. The value of the opportunity
  5. The close date
  6. The sales stage

…and I can manage the world. In fact, I coined the phrase “the shape and velocity” of the sales cycle over a decade ago, based on the theory of movement and change in the sales process, tracked through changes in only three of these data points. The theory was sound; it still is. The resulting insights are powerful if the practitioner knows how to interpret them. The problem is not the theory, nor the desire to keep the CRM system simple.


The problem is that marketing and sales involve people. People cannot be represented by CRM “fields”. People are unstructured.

To illustrate what I mean, let me ask a question. How many fields do you have in your opportunity record in Salesforce.com? How many of them are consistently and accurately recorded by the rep? Maybe 20? Let’s be generous and say that we can accurately report on 30 separate and unique fields (pieces of information) for each opportunity over time. Some of the field values change over time, so by the end of the sales cycle, let’s say we have 100 unique pieces of information for each opportunity to help us understand our reps’ and buyers’ behavior. This is called structured data; it’s in a database, categorized in fields.

Now, a few more questions. During that one sales process, how many interactions (emails, chats, social posts, documents, voicemails, live conversations) take place between your company and the buyer's company? How long is each one? How many words are written or spoken in each? (Keep in mind the average speech rate is 200 words per minute… higher among sales reps.) Even if you have a very fast sales cycle, and there are only 20 total interactions from suspect to close, and only a few are longer than 10 minutes, you likely have over 50,000 words exchanged. How many “pieces of information” are there here? Unfortunately, even if you're capturing all your emails in your CRM with a cool new technology like SalesforceIQ, this information is not in “fields”. This is called unstructured data.
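For the curious, here is one plausible set of assumptions behind the 50,000-word figure; every number below is illustrative.

```python
# One plausible set of assumptions behind "over 50,000 words" across ~20
# interactions in a short sales cycle. Every number here is illustrative.
WPM = 200                            # average speech rate cited above
call_minutes = [60, 45, 30, 20, 15]  # five calls, a few well over 10 minutes
spoken = sum(call_minutes) * WPM     # 170 minutes -> 34,000 words

written_items = 15                   # emails, chats, documents, proposals
avg_words_each = 1_200               # threaded emails and documents run long
written = written_items * avg_words_each   # 18,000 words

print(spoken + written)              # 52,000 words -- none of it in "fields"
```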

In this little math exercise, the ratio is 100 times more unstructured information than structured. In reality, it's higher. A pair of rather inconvenient truths about the two kinds of data:

  • Structured data can only tell us what happened
  • It’s the unstructured data that tells us why it happened; the behavior, psychology, sentiment, intent of the buyer

I was wrong. While simplifying and analyzing CRM fields can be helpful, the real value is hidden away “inside the conversation”.

How do you personally get inside the conversation? How many calls per day do you listen to? How many emails do you read? How many could you if you spent every minute of every day doing it?

There must be a better way.

Powerful and actionable insights are hiding inside your conversations. In future posts we will explore how to find them through the power of A.I. applied to unstructured big data.

You Got Big Data! Where’s the BIG ROI?

The Big Data ROI Conundrum
Let's face it: Big Data is neither easy nor cheap. It has proven to be one of the most difficult conundrums for experts, not only on the technical side but also in justifying its ROI. But it doesn't have to be this way. Most companies justify it as an investment in the future, something that has to be done to stay competitive whether or not its value will be proven in the near future. In other words, it is the new buzzword tech trend, as concepts such as “BI”, “mobile” and “cloud” become commodities.
Let's take a moment and analyze the implementation of a typical Big Data infrastructure.

We can divide Big Data implementations into three major phases:
1. Storage, replication and retrieval.
2. Processing data (structured and unstructured).
3. Intelligence extraction.


Storage, Replication & Retrieval
Most organizations learn the hard way how expensive and time-consuming it is just to finish the storage layer of the architecture. The storage layer comprises the hardware and core software that provide data warehouse capabilities, replication and retrieval. This is the stage where companies deploy a distributed file storage system like Hadoop, document stores, key-value stores, or even graph NoSQL systems that represent correlations in the data graphically. Many vendors support the main open source technologies, and commercial applications exist to ease management of the ecosystem where the data warehouse will reside. The choices are plenty. Unfortunately, the cost, and especially the time to implement, is high. On top of that, executives enthused about “Big-datafying the stack” are disappointed to learn that, after all the time and energy spent creating a repository of structured and unstructured data, they still haven't seen a quantifiable return on investment, and that customers still won't pay extra until the value can be explained in simple, articulate terms.


Processing Stored Data
After the first phase comes the phase commonly referred to as “Beyond Hadoop”. This is when you understand that, with all that data in place, you now need to actually process it, organize it and query it. This is the stage where people start writing map-reduce jobs and ETL systems to transform the data (into known relational databases), then organize, query, search and make sense of it. Again, not an easy process; it demands time, investment and scarce specialized personnel.
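To make the “Beyond Hadoop” work concrete, here is a toy sketch of the map-reduce pattern described, written as a plain-Python word count rather than a real Hadoop job; the documents are invented.

```python
# Toy illustration of the map-reduce pattern: a word count over unstructured
# "documents", in plain Python rather than a real Hadoop job.
from collections import Counter
from itertools import chain

documents = [
    "pricing objection raised on the call",
    "call went well pricing approved",
]

# Map phase: every document emits (word, 1) pairs.
mapped = chain.from_iterable(((w, 1) for w in doc.split()) for doc in documents)

# Reduce phase: sum the counts per key.
reduced = Counter()
for word, count in mapped:
    reduced[word] += count

print(reduced.most_common(3))   # e.g. [('pricing', 2), ('call', 2), ...]
```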


Extracting Insights
Finally we come to the last piece of the architecture: intelligence extraction. Here is where “text analytics”, “data analytics” and “machine learning” solutions come into play. After going through the time and expense of implementing the storage and processing layers, it is now time to make sense of the data, especially to unlock value from unstructured data. That can be any type of text, from properly written news documents to user-generated social media data, emails, notes, survey responses, chats, support tickets or phone call transcripts.

After all the investment put into the first two layers, this layer turns out to be the iceberg under the “tip”. To unlock the value of that data, it is imperative that it be analyzed by some sort of text analytics solution. Typically, a company either outsources this to a third-party text analytics consulting vendor or tries to develop it in house. Both are expensive options.

The current crop of text analytics solutions on the market is not plug and play. No matter how flashy the demos and data-visualization dashboards, or how tempting the promises, the truth is: more time and cost to implement them. Why? Out of the box, those tools are not ready to produce value. They are based on commodity statistical machine learning and natural language algorithms. Yes, they can help by giving you basic sentiment, classifications and key named entities, but they cannot unlock the intelligence in the data by themselves. In other words:

Conventional NLP-based text and sentiment analysis tools require supervised training for domain-specific data, because they rely on language models and industry domains. The time to train these systems, and to maintain them, becomes an ongoing tax.
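As a sketch of what that training tax looks like, the snippet below builds a tiny supervised text classifier with scikit-learn; the labels and examples are invented, and every new domain or language means repeating the labeling and retraining.

```python
# Sketch of the "ongoing tax": even a toy supervised text classifier needs
# hand-labeled, domain-specific examples, and must be relabeled and retrained
# as vocabulary, domain or language changes. Data here is invented;
# scikit-learn is used purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "churn risk: customer unhappy with support",
    "expansion: customer wants more seats",
    "asking how to cancel the contract",
    "interested in upgrading to the premium tier",
]
labels = ["churn", "growth", "churn", "growth"]  # the human labeling = the tax

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)          # repeat per domain, per language, per drift
print(model.predict(["they want to cancel"]))    # expected: ['churn']
```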

This is when the text analytics companies come to the rescue, in the guise of “professional services” added to the mix of technologies you have already acquired. Worse still, what if you have to deal with data in the different languages of the different markets where your company has a presence? The contemporary solution: train those tools again for each given language. As you can see, this last phase seems never-ending. It's like buying a car that needs an oil change every other day.

So, how do you fix this never-ending ROI conundrum?
OmnIQuo provides A.I.-based cognitive computing technology which, unlike the text analytics solutions out there, requires no training. Omniquo's platform provides access via APIs and cloud-based services, so your organization need not go through the elaborate task of setting up all the infrastructure. Yes, omnIQuo's tools are plug and play out of the box. Data from any domain, in any format (news, web sites, surveys, blogs, email, documents and user-generated content), is not only fed to it right away but also processed in real time. No sour-taste surprises, no catches and no disappointments.

That said, you can take advantage of Omniquo's cloud-based services even if you have already set up your Big Data infrastructure, i.e. whether you are in pre-phase 1 or post-phase 2. If you have already invested in your infrastructure, you can start justifying its value and get a quick return on your investment by harvesting the intelligence in your data using the APIs and pre-built services.
Omniquo's Deep Meaning Insights technology does not require you to program scripts to catch information. On top of that, Omniquo's adaptive-learning narrative and events extraction engine can extract insights from data even without pre-defined template frames, enabling organizations to gain answers to questions that were not even discoverable to begin with.
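As a purely hypothetical sketch of what calling such a cloud service might look like, consider the snippet below; the endpoint, request fields and response shape are invented for illustration and are not Omniquo's documented API.

```python
# Purely hypothetical sketch of calling a cloud text-analysis service.
# The endpoint, request fields and response shape are invented for
# illustration and are NOT Omniquo's documented API.
import requests

resp = requests.post(
    "https://api.example.com/v1/analyze",   # placeholder URL, not a real one
    json={"text": "Shipment delayed again; the customer may not renew."},
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. entities, events and sentiment, returned in real time
```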

 

What's so special about Omniquo's Language Interpretation Engine?
Communication via language serves to convey a message, an idea, an abstract concept, an action, an event or a fact, among other things. That is why traditional statistical machine learning models are limited: they cannot model all the intricacies of communication. Most text analytics solutions on the market are built on the same DNA, provided by the OpenNLP or Stanford NLP projects. These projects were trained on standard language data, and hence they fail to evolve as data grows big. OmnIQuo's proprietary and differentiated approach is to use the intelligence already embedded in language instead of relying on models. Language is merely a protocol, and many human languages follow a similar protocol. That is why the type of language (English, French, German, etc.) becomes just one layer of the protocol. Once you can extract the intelligence from the protocol, the particular 'language' that transmitted it does not matter.

Artificial Intelligence Based Interpretation of Corporate Reports & Documents

In the last 50 years, there has been a big shift in the nature and magnitude of content found in corporate annual reports. For instance, a survey that studied annual reports generated between 1965 and 2005 showed a 200% increase in their verbiage.

What contributed to this 'bloat' was the narrative portion of these reports, which grew almost 400% on its own. The reason is quite simple: modern corporate thinking has reshaped the annual report from a boring financial document into an advertising and public relations document.

The organization of these reports' contents is interesting too. Financial statements are presented mostly in tables and graphs. Narratives are used to inform stakeholders about topics such as the company's future plans, marketplace conditions, customer opinions, corporate social responsibility activities and the competitive situation. Growth in imagery, such as management photographs and facility buildings, has also contributed to the increased size of these documents.

The challenge with such bloated content is that correlating text-based narratives, visual financial graphics and imagery makes it hard to extract the key essence, context and meaning. The key goal of prospective investors is to contextually comprehend all salient aspects of the company's business performance and potential. For example, a company that is not very sound on its financial numbers, but whose planned activities and products present a solid base, could be a good investment opportunity; combined with a currently low share price, it could make for a lucrative investment.

These challenges show why OmniQuo's cognitive computing technology can make a significant difference in the financial analysis of companies, through automated review of their document collateral as well as other channels such as news, analyst reports and user-generated content on social media platforms and microblogs.

omnIQuo's Knowledge Analytics engine offers a psychological analysis of content, making it possible to extract insights buried in long narratives, such as market perception, corporate reputation and attitudes toward investment in a particular share or mutual fund.

In particular, OmnIQuo provides the following insights, as well as the correlations between them (an illustrative sketch of such output follows the list):

1. Narrative information understanding.
2. Event extraction and correlation.
3. Entity and entity-role definition, as well as event references.
4. Summarization of language.
5. Psychological analysis in terms of affects.
6. Topics.
7. Annotation for quick reference to everything extracted.
8. State-of-the-art sentiment understanding based on affect and co-referenced to entities (people, products, companies) as well as events and full narratives.
9. Multi-document analysis and the intersecting relationships among narratives, events, entities, topics and affects.
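To make the list concrete, here is an invented example of the shape such extracted insights might take for a single annual-report passage; none of this is actual omnIQuo output.

```python
# Invented example of the shape such extracted insights might take for one
# annual-report passage; this is NOT actual omnIQuo output.
insights = {
    "narrative": "Management expects margin pressure from raw-material costs.",
    "events": [{"type": "cost_increase", "subject": "raw materials"}],
    "entities": [{"name": "raw materials", "role": "cost driver"}],
    "summary": "Margins at risk from rising input costs.",
    "affects": {"confidence": "low", "concern": "high"},
    "topics": ["margins", "supply chain"],
    "sentiment": {"target": "margins", "polarity": -0.6},
}
print(insights["summary"])
```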

Prediction is a key activity in the investment and financial market realm. Statements in an annual report concerning future plans, along with other statements from public-perception content and news from specialized channels, are used by omnIQuo's neural-network-based Predictive Engine to infer intelligence that aids investment decisions.

In summary, omnIQuo's A.I. engines can automatically read documents and extract their key points, narratives, events, affects, multi-document references and predictive information. This capability reduces time and optimizes research by focusing on key stated facts and topics in an otherwise heterogeneous collection of reports and investment data.
