Have you ever wondered how something called social media can contain so much unsocial sentiment? It’s quite ironic. But fortunately for those charged with applying their tradecraft to detect and protect against threats, that negative sentiment can be spotted — as long as the right analytic tools are deployed.
Of course, social media is just one of millions of places security personnel should look for indications of real threats to the people and organizations they’re charged with protecting. Websites and online forums, even blogs (but not this one), hold nuggets of information among billions of pieces of data that can indicate potential peril.
All this information is overwhelming to people. Machines, however, can make sense of the volume, diversity, and complexity of this massive data stream, and that sense can help you determine whether an online rant is simply a chance to blow off steam or part of a plot to blow something up. Machines can also make sure the nuances of a particular language that might help differentiate idle talk from an imminent threat aren’t lost in translation. And perhaps most important, they can do all this in time for you to take the steps necessary to protect people and property.
But first you have to train the machine to understand and interpret what it’s seeing, which is no simple task. To begin, the algorithms need to sift out commonly used words to eliminate the noise in social media data streams. Next, genuine threats have to be differentiated from language that merely sounds threatening. “Thanks to the @abccorp execs my portfolio got slaughtered!” is very different from “Someone should slaughter the @abccorp execs.” Knowing the difference takes native-language analysis, not literal translations.
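The two steps above can be sketched crudely in code. Everything here (the stop-word list, the verb list, and the base-form heuristic) is an invented placeholder for what would, in practice, be trained language models:

```python
# Toy word lists for illustration only; real systems learn these.
STOPWORDS = {"thanks", "to", "the", "my", "got", "should", "someone", "a"}
VIOLENT_VERBS = {"slaughter", "kill", "attack"}

def denoise(post: str) -> list[str]:
    """Step one: strip punctuation/handles and drop common noise words."""
    words = (w.strip("!.,?@").lower() for w in post.split())
    return [w for w in words if w and w not in STOPWORDS]

def looks_threatening(post: str) -> bool:
    """Step two (crude heuristic): a violent verb in its base form,
    aimed at a following target ("slaughter the execs"), reads as a
    threat; the same verb in past tense describing the author's own
    losses ("got slaughtered") does not."""
    words = denoise(post)
    for i, w in enumerate(words):
        if w in VIOLENT_VERBS and i + 1 < len(words):
            return True
    return False
```

On the two example posts, only the second trips the heuristic, because “slaughtered” is not the base form in the verb list.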
Finally, the algorithms need to identify imminence, which adds a dimension of time to threats targeting your specified assets, locations, or events. For government agencies, imminence may also be weighed by the influence of the person making the threat: did the rhetoric come from an angry individual or from someone with a record of getting people to take action?
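As a sketch of how time and influence might combine into a single imminence score (the fields, weights, and caps are all assumptions, not a real scoring model):

```python
from dataclasses import dataclass

@dataclass
class ThreatMention:
    days_until: int             # time until the referenced date or event
    targets_watched_asset: bool # does it name an asset/location you protect?
    author_followers: int
    prior_calls_to_action: int  # past posts that demonstrably mobilized others

def imminence_score(t: ThreatMention) -> float:
    """Sooner dates score higher; an author with reach and a record of
    getting people to act raises the score further. Weights are illustrative."""
    if not t.targets_watched_asset:
        return 0.0
    time_factor = 1.0 / (1 + max(t.days_until, 0))       # today => 1.0
    influence = min(t.author_followers / 10_000, 1.0)    # capped at 1.0
    track_record = min(t.prior_calls_to_action / 5, 1.0) # capped at 1.0
    return time_factor * (1.0 + influence + track_record)
```

A same-day threat from a high-reach author with a mobilization history scores up to three times higher than the same words from an isolated account.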
With all these analytic requirements, it makes sense to employ multiple, highly specialized threat assessment engines. This approach lets separate algorithms focus on detecting violent threats; nonviolent threats, such as disruptive demonstrations; proximity threats, such as those not targeted at your interests but rather at something nearby; and event threats aimed at a planned occasion, such as the Olympics, with a defined location and date. When properly deployed, machine learning algorithms have an impressive track record of identifying threat indicators buried in social media and other online channels.
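One way to picture the multiple-engine approach, with trivial keyword rules standing in for the specialized models the text describes:

```python
from typing import Callable

# Each engine is an independent detector; these keyword rules are
# placeholders for purpose-built models.
ENGINES: dict[str, Callable[[str], bool]] = {
    "violent":    lambda p: "attack" in p.lower(),
    "nonviolent": lambda p: "protest" in p.lower(),
    "proximity":  lambda p: "next door" in p.lower(),
    "event":      lambda p: "olympics" in p.lower(),
}

def assess(post: str) -> list[str]:
    """Run every engine independently; one post can raise several flags."""
    return [name for name, detect in ENGINES.items() if detect(post)]
```

Because the engines vote independently, a single post can be flagged by more than one of them, rather than being forced into one category by a monolithic model.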