Wednesday, 31 July 2013

How to monitor threats and abuse on the Internet with minimal effect on civil liberties

Dr Andy Edmonds, Concept Strings, http://


Although many in government long to detect and prevent abusive and threatening texts and images on the internet, current technology is not up to the job. Inaccurate net filters are a threat to the liberty of all those who would be wrongfully accused or blocked, and represent a real headache for the various service providers and search engines who are currently receiving so much ire. An objective and accurate system that is also traceable, i.e. can explain why a piece of text was deemed abusive, is needed. We at Concept Strings believe we have such a system.

In the UK and other countries, politicians are making some loud noises about the internet and some of its more negative aspects.
The two key areas, detecting and blocking child pornography and detecting abusive or threatening tweets, blogs and the like, present huge technical challenges.
Politicians have shown impatience with organizations such as Google and Twitter, but, being politicians rather than technologists, they don’t realise that what they are asking for is in some ways beyond the capability of current technology.
If you use a piece of software to categorize text or images, there will always be so-called “false positives”, i.e. harmless images or texts that are deemed to be harmful, and “false negatives”, harmful images or texts that are deemed harmless.
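To make the two error types concrete, here is a minimal sketch (the keyword rule and the labelled examples are invented for illustration, not drawn from any real filter): a naive classifier that flags any text containing the word “kill”, counted against hand-labelled examples.

```python
# Toy illustration only: a naive filter that flags any text containing
# "kill" as harmful, counted against hand-labelled examples.

samples = [
    ("I will kill you", True),                       # harmful
    ("this workout will kill me tomorrow", False),   # harmless figure of speech
    ("you deserve what is coming to you", True),     # harmful, but no keyword
    ("lovely weather today", False),                 # harmless
]

def naive_flag(text):
    return "kill" in text.lower()

# False positive: flagged as harmful, actually harmless.
false_positives = sum(1 for text, harmful in samples
                      if naive_flag(text) and not harmful)
# False negative: passed as harmless, actually harmful.
false_negatives = sum(1 for text, harmful in samples
                      if not naive_flag(text) and harmful)

print(false_positives, false_negatives)  # 1 1
```

Even on four examples the naive rule makes one error of each kind: the figure of speech is wrongly flagged, and the keyword-free threat slips through.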
Clearly if you’ve posted a harmless tweet and the police turn up outside your door, your civil liberties are about to be seriously compromised.
False positives in this area can cause real problems for those who are wrongly accused, and you can understand Google or Twitter not wanting to get involved in this. Every false positive is a potential lawsuit.
Ignoring images, where the techniques are very different, the problem with conventional text mining software that might be used to detect abusive text is that it’s not very accurate. If you consider sentiment mining, which looks at tweets or blogs that mention a product or brand and tries to infer positive or negative sentiment, accuracy is typically only around 75%. This doesn’t matter much for sentiment mining, where it’s the trend that users are interested in.
However, using such techniques on Twitter to identify abusive or threatening tweets would cause chaos.
Part of the problem is that almost all text mining techniques rely on word frequencies and opaque models derived from them. Not only are they not particularly accurate, but it’s almost impossible for the layman to work out how they arrived at their classification.
This is a nightmare for an organization that might be asked to defend in court why it classified a piece of text one way or the other.
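The word-frequency problem can be shown in miniature. Bag-of-words features, the basis of most such models, discard word order entirely, so two texts with opposite meanings can produce identical feature vectors, and no model built on those counts can tell them apart:

```python
# Why word-frequency features are weak here: two sentences with opposite
# meanings yield identical bag-of-words counts.
from collections import Counter

def bag_of_words(text):
    return Counter(text.lower().split())

a = bag_of_words("the dog bit the man")
b = bag_of_words("the man bit the dog")
print(a == b)  # True: word order, and hence the meaning, is lost
```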
It’s worth mentioning that some organizations provide services using “dictionaries of abuse”. These are hard to maintain, also inaccurate (it’s quite possible to be abusive or threatening without using abusive words), and easy to circumvent.
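A sketch makes the weaknesses of such dictionaries obvious (the word list here is a stand-in invented for illustration, not any real product’s):

```python
# Sketch of a "dictionary of abuse" filter and why it is easy to
# circumvent. ABUSE_DICTIONARY is an invented stand-in word list.

ABUSE_DICTIONARY = {"idiot", "scum"}

def dictionary_flag(text):
    words = text.lower().split()
    return any(w in ABUSE_DICTIONARY for w in words)

print(dictionary_flag("you absolute idiot"))     # True: caught
print(dictionary_flag("you absolute id1ot"))     # False: a trivial misspelling slips through
print(dictionary_flag("I know where you live"))  # False: threatening, but no listed word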
An ideal system would be easy to set up and change, have few false negatives or positives, and have traceability, i.e. it should be immediately obvious why a particular piece of text was categorized. Ideally false positives and negatives should be fed back into the system to improve results.
As you’ll have guessed, Concept Strings has been developing just such a technology. It represents a complete break from conventional word frequency techniques, instead using concepts rather than words to recognize ideas being expressed in text. It’s a Natural Language Processing technique, but it makes use of ideas from machine learning and DNA sequencing to recognize sequences of concepts.
To use it you create templates of the kind of text you are looking for. The system then recognises the sequence of concepts implied in these templates, gives you the chance to edit them, and then can search incoming text highly efficiently for sequences of concepts that match.
The great power of this approach is that a handful of templates can match thousands of ways of saying the same thing. Our system uses internationally recognized thesauri not only to recognize words that might mean the same thing, but also words that name a kind of the concepts in the template. Thus a template containing “horse riding” would match “pony riding”, “palomino riding” and many others.
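Concept Strings’ actual implementation is proprietary, but the general idea of template matching over concepts can be sketched as follows. The tiny hand-built thesaurus, the word-to-concept mapping, and the subsequence rule are all assumptions made for illustration:

```python
# Hypothetical sketch of concept-template matching, NOT Concept Strings'
# actual algorithm. A tiny hand-built thesaurus maps each word to the
# broader concept it names a kind of; a template matches a text when the
# template's concepts occur, in order, among the text's concepts.

THESAURUS = {  # word -> broader concept (hypernym)
    "pony": "horse",
    "palomino": "horse",
    "horse": "horse",
    "riding": "riding",
}

def to_concepts(text):
    """Map each word to its concept, dropping words outside the thesaurus."""
    return [THESAURUS[w] for w in text.lower().split() if w in THESAURUS]

def matches(template, text):
    """True if the template's concept sequence occurs, in order, in the text."""
    it = iter(to_concepts(text))
    return all(concept in it for concept in to_concepts(template))

print(matches("horse riding", "we went pony riding at the weekend"))      # True
print(matches("horse riding", "palomino riding is her favourite hobby"))  # True
print(matches("horse riding", "we enjoyed riding the ferris wheel"))      # False
```

One template, “horse riding”, matches both the pony and the palomino sentences because the thesaurus records them as kinds of horse; a real system would of course draw on a full published thesaurus rather than four entries.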
The traceability is inherent in the use of templates. Any match can be defended easily in the boardroom or the court, and any problems in the templates can be easily corrected by any intelligent native speaker of the language employed.
Concept Strings would love to talk to anybody who might be interested in this technology, which is available in SDK form.
Please send any expressions of interest to

May be reproduced freely, wholly or in part, so long as the attribution to the author and company is included.
