After some first statistics about the Twitter dataset, I try to go further. I have already discussed how to extract tokens from Twitter, more precisely from the biography and the last tweets of users. The problem is that the bio attribute alone is composed of 11,702 tokens. No way all these tokens are interesting (are itv1, rmer, 112/xm giving you any insight?). But how do you remove all the uninteresting tokens while keeping the good ones? As always, there is a trade-off between keeping too much noise and throwing away some gold nuggets. In my view, an interesting token is a tag, as a tag usually gives you some knowledge about an item. The problem is that for each user you have a set of tokens, while what you want is a set of tags, i.e. tokens with interest.
I found two ways to build the kept tags from all the tokens. The first, the whitelist way, checks each token against a whitelist: if it's not in the list, remove the token; if it is, the token is a tag. The second, the blacklist way, also checks each token against a list, but keeps only the tokens which are not in the blacklist. Both methods have drawbacks. The whitelist is more likely to remove interesting tokens that were not spotted in advance: twitter should be a tag, but if you made your whitelist two years ago you won't have whitelisted it (or you're a visionary). Thus you could miss some emerging trends. The blacklist has the opposite drawback: you keep too much. Considering my goals, I chose the whitelist way (a minimal sketch of both filters is just below). Now it's time to construct the list of tokens which are really tags.
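To make the difference concrete, here is a minimal sketch of both filters in SQL, the language used for the query in the PS at the end of this post. The whitelist and blacklist table names are assumptions; twitterBioToken is the bio token table used later.

-- Whitelist way: a token survives only if it is in the whitelist
-- (the `whitelist` and `blacklist` tables are hypothetical).
select twi.token
from twitterBioToken twi
join whitelist w on w.token = twi.token;

-- Blacklist way: a token survives unless it is in the blacklist.
select twi.token
from twitterBioToken twi
left join blacklist b on b.token = twi.token
where b.token is null;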
The easiest solution is to check each token of the dataset by hand. While probably the best solution, I found it boring and not in the data mining spirit. Thus I used some tricks, which you can call a priori knowledge, or choices:
- Trick 1: a tag is in general an English noun. English, because handling multiple languages would be a pain. I have no idea why it should be a noun, but all the tags I can think of are nouns. Thus we can use WordNet, which is an English lexical database: we just have to dump all its nouns into our whitelist (see the sketch after this list). This trick takes out 7,159 tokens (61%).
- Trick 2: a tag should be used many times. Could we extract a pattern from a tag used only once? I know that none of the algorithms I can use on this dataset could do that, thus it is useless to keep such tokens. Of course, this decision could not be made if the dataset were still growing (the tag could be used more in the future). With a threshold of 10 occurrences, this discards 10,911 tokens (93%).
- Trick 3: a tag has more than 3 characters. As for the first trick, all the interesting tags I have in mind have more than 3 letters. The most frequent short tokens are 'i', 'my', 'you' (and 'love', but the pattern 'i love you' has only one occurrence, in a profile whose bio attribute reads "give me a love, and i will love you more :3", funny). This trick dismisses 1,193 tokens (10%).
- Maybe you have other tricks?
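For trick 1, here is a sketch of how the whitelist table could be bootstrapped from WordNet. It assumes MySQL and WordNet's plain-text index.noun file (where the lemma is the first space-separated field); the WordNetTokens table is the one joined in the query in the PS below.

-- Hypothetical bootstrap of the noun whitelist from WordNet
-- (assumes MySQL and a local copy of WordNet's index.noun;
-- the license header lines of the file should be stripped first).
create table WordNetTokens (
  token varchar(64),
  type  varchar(16)
);

load data local infile 'index.noun'
into table WordNetTokens
fields terminated by ' '
(token)
set type = 'noun';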
Thus, using all these 3 tricks, we end up with only 430 tags. Easier to manage and to read. Here are the most used tags:
+-----------+----------------------+
| token     | count(distinct user) |
+-----------+----------------------+
| twitter   |                  243 |
| love      |                  238 |
| news      |                  208 |
| life      |                  170 |
| music     |                  167 |
| world     |                  143 |
| social    |                  138 |
| more      |                  132 |
| writer    |                  130 |
| people    |                  119 |
| like      |                  111 |
| time      |                  108 |
| have      |                  104 |
| business  |                  102 |
| marketing |                   98 |
| blogger   |                   88 |
| work      |                   85 |
| internet  |                   81 |
| lover     |                   81 |
| real      |                   79 |
+-----------+----------------------+
As you can see, there are still many tokens which are likely to be meaningless ('more', 'have'). Nevertheless, it's easier to see things now. Uninteresting tags are also unlikely to be considered as a pattern by our algorithms. At least we hope so 🙂
OK, that was for the biography. Now the content attribute. It's more challenging, with 126,566 tokens! Using the same 3 tricks reduces the number to 3,207. Could all these tags give an insight? The filtering is the same query as in the PS below, with the content token table instead of the bio one.
PS: just as a reminder, the SQL query:
select twi.token, count(*)
from twitterBioToken twi
join WordNetTokens dico on dico.token = twi.token
where dico.type = 'noun'
and length(twi.token) > 3
group by twi.token
having count(*) >= 10;
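And for the content attribute, the run would presumably be the same query with the bio table swapped out; twitterContentToken is a hypothetical name mirroring twitterBioToken:

-- Same filter applied to the tweet content tokens
-- (twitterContentToken is an assumed table name).
select twi.token, count(*)
from twitterContentToken twi
join WordNetTokens dico on dico.token = twi.token
where dico.type = 'noun'
and length(twi.token) > 3
group by twi.token
having count(*) >= 10;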