Last night 25 million people watched the Republican debate that aired on CNBC.  Thousands more were huddled around conference room tables, checking facts and feeding information to thousands of others who were determining the relevance of every spoken word and expression.

Within 30 minutes of the start of the debate, npr.org had already published their ‘who’s spoken the most so far’ article, and before the closing credits, the final report had already hit the wire.  A little later in the wee morning hours, NBC had published a play-by-play of all of the night’s events, including who-said-what and when.

Although some of this may be automated and technology-driven, most of it is still completely manual.  Can you visualize a room full of people with stopwatches making sure that they don’t miss a single second of the action?

This information is certainly valuable, but how much of it is relevant?  Is there any correlation between who spoke the most last night and who’s going to win the primary, or even make it to the next debate?  We focused less on these questions and more on getting the data back as quickly as possible, creating timestamped files that included not only spoken words but also closed captions, faces, brands, and on-screen text that can be meaningful in the complete picture, including the live social feeds running at the bottom of the screen.  We call this our all-inclusive approach, or our 360 view: taking lots of data and turning it into information that leads to relevant facts supported by objective data.  A picture is worth a thousand words, or in this case a ‘frame’, and there were literally thousands of words and references produced with every minute of the broadcast.

The All-Inclusive Approach

Why is this important?  When capturing data, it’s not enough to get an audio transcription.  Everything else that is going on matters.  The closed captions support or reinforce what the candidates are saying by ensuring that all dialog is truly captured.  (For example, Jeb’s mic failed near the end of the debate and some of his speech was inaudible.)  The social stream that ran during the broadcast also lends sentiment by showing what viewers are posting online; although the broadcaster can pick and choose which posts to run, it can provide insight into activity across social channels.  It’s not just about capturing concrete data, like audio; the picture is much bigger, and there’s much more to it.

Back to the room full of people with stopwatches.  That subjective process can be called into question unless it’s backed up by real data.  Our all-inclusive approach can actually validate the headlines that you’re reading today and give credibility to the facts.

Getting accurate, relevant, timestamped data into the hands of the people who need it as soon as possible gives one technology an advantage over another.  That’s what we call relevant data using an all-inclusive approach.