Twitter is a famous social website. It works like a blog, but limits each message to 140 characters. It is therefore called microblogging, and it is meant to be updated much more often, with every thought you might have. Can anything useful be done with such terse data?
I'm only at the beginning of this project. I have set up a basic crawling infrastructure to extract a dataset from Twitter and mine it.
The collected data has six attributes: user name, location, followers count, following count, biography (a small "who am I" field) and the concatenation of the user's last messages. Below is an example of a profile, that of a public figure named Richard Bacon. This example shows how messy the information is. The location is rather unclear (GPS coordinates). The biography is quite short (but, in this case, very clear). And the content is… confusing.
id: 1351
name: richardpbacon
location: iphone 51.511682 0.224661
nbFollowing: 72
nbFollowers: 360574
bio: minor celebrity bbc radio fivelive presenter
content: yep she tweeted sunday her tweet alone theyd have run monday news 10 asking susan boyle backlash she overrated sounds like someone team listened 5live way work sounds like someone news 10 team (...)
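For readability, such a record could be held in a small Java class like the one below. It is only illustrative: the class and field names mirror the example above, not my actual crawler code.

// Illustrative only: a plain Java representation of one crawled profile,
// with the attributes described above plus the record id shown in the example.
public class TwitterProfile {
    public final long id;
    public final String name;
    public final String location;   // often noisy: device name, GPS coordinates, free text
    public final int nbFollowing;
    public final int nbFollowers;
    public final String bio;        // the small "who am I" field
    public final String content;    // concatenation of the user's last messages

    public TwitterProfile(long id, String name, String location,
                          int nbFollowing, int nbFollowers, String bio, String content) {
        this.id = id;
        this.name = name;
        this.location = location;
        this.nbFollowing = nbFollowing;
        this.nbFollowers = nbFollowers;
        this.bio = bio;
        this.content = content;
    }
}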
Actually, the content field displayed above has already been processed: I used Lucene to tokenize and clean the text. Below is the text before and after applying Lucene to get tokens instead of free-form text.
before: News at 10 asking, is there a Susan Boyle backlash / is she overrated? Sounds like someone on the team listened to 5live on the way to work.
after: news 10 asking susan boyle backlash she overrated sounds like someone team listened 5live way work
As you can see, there are still a lot of meaningless tokens, like 5live.
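For reference, here is a minimal sketch of this kind of tokenization with Lucene. It is not my exact pipeline: it assumes a recent Lucene release (package names and constructors differ between versions) and a tiny hand-made stop-word list, just enough to reproduce the example above.

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class Tokenize {

    // Small, hand-made stop-word list; a real run would use a larger one.
    private static final CharArraySet STOP_WORDS = new CharArraySet(
            Arrays.asList("a", "an", "and", "at", "is", "on", "the", "there", "to"), true);

    public static List<String> tokens(String text) throws IOException {
        List<String> result = new ArrayList<>();
        try (Analyzer analyzer = new StandardAnalyzer(STOP_WORDS);
             TokenStream stream = analyzer.tokenStream("content", text)) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                result.add(term.toString()); // StandardAnalyzer already lower-cases
            }
            stream.end();
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tokens("News at 10 asking, is there a Susan Boyle backlash"));
        // prints: [news, 10, asking, susan, boyle, backlash]
    }
}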
I have run a quick segmentation (not much data, not a great algorithm, not much cleaning) on the biography tokens only. Nevertheless, with 25 clusters, things start to emerge. For instance, one cluster has a high relative frequency of tokens like university, engineering, computer, student, science, studying, school: a students cluster (3% of my dataset). There is also a cluster of official accounts of public figures (twitter, page, official, feed), some geek clusters (one for Mac or Linux users, one for open-source software developers, another for web developers), a cluster of company Twitter accounts (tokens like company, services, production, advertising, leading) and a photographers cluster (photography, make-up, light, photo, traveler).
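I used my own framework for this, so the snippet below is not my actual code; it is only a rough sketch of the same idea in Weka: turn each biography into a bag-of-words vector with StringToWordVector, run SimpleKMeans with 25 clusters, and look at the dominant tokens of each centroid.

import java.util.ArrayList;
import java.util.List;

import weka.clusterers.SimpleKMeans;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.StringToWordVector;

public class BioClustering {

    public static void main(String[] args) throws Exception {
        // Toy input: in reality the biographies come from the crawled dataset.
        List<String> bios = List.of(
                "computer science student at university",
                "open source software developer, linux geek",
                "official twitter feed of a leading advertising company",
                "photographer, light and make-up lover");

        // One string attribute holding the raw biography text.
        ArrayList<Attribute> atts = new ArrayList<>();
        atts.add(new Attribute("bio", (List<String>) null));
        Instances data = new Instances("bios", atts, bios.size());
        for (String bio : bios) {
            double[] vals = {data.attribute(0).addStringValue(bio)};
            data.add(new DenseInstance(1.0, vals));
        }

        // Bag-of-words vectors (one numeric attribute per token).
        StringToWordVector bow = new StringToWordVector();
        bow.setLowerCaseTokens(true);
        bow.setInputFormat(data);
        Instances vectors = Filter.useFilter(data, bow);

        // K-means with 25 clusters, as in the experiment described above
        // (capped here because the toy dataset is tiny).
        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(Math.min(25, vectors.numInstances()));
        kmeans.buildClusterer(vectors);

        // The centroids show which tokens dominate each cluster.
        Instances centroids = kmeans.getClusterCentroids();
        for (int c = 0; c < centroids.numInstances(); c++) {
            System.out.println("cluster " + c + ": " + centroids.instance(c));
        }
    }
}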
More work has to be done, but the first insights are encouraging.
June 4, 2009 at 10:08
can you make the source available for download plz?
June 4, 2009 at 14:46
Here is some code. I use my own data mining framework, but it should be easy to port it to Weka.