What Is The Data Matrix? Machines and Humans Coexist

February 18, 2005 | Category: Two Way Web

Andrew Nachison from The Media Center raises some interesting issues regarding RSS content aggregation. He boils it down to three questions:

1. Machines vs. humans? [with regard to aggregation]

I agree with Andrew's conclusion: "This isn't an “either/or” theory but a “both/and” reality." P.S. also check out The Media Center Matrix. Rich Skrenta from Topix.net has a similar view: "For comprehensiveness, algorithmic techniques will have to come into play. People-powered systems just don't scale to the long tail."

I too think there's a middle ground between machine and human/social aggregation. Currently I think we're probably nearing the peak of human/social feed aggregation, in percentage terms. What I mean is that the average blogger/reader probably subscribes to 80-90% human feeds and 10-20% machine feeds - and most of the latter would be egofeeds from the likes of PubSub, Technorati and Feedster (sometimes egofeeds of other people :-). So topic and tag RSS feeds are at a very early stage of adoption - companies like PubSub, Findory and Topix are leading the way.
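
To make those percentages concrete, here's a rough sketch of how you might measure your own human/machine mix from an aggregator's OPML export. The machine-feed host list and the "subscriptions.opml" filename are just assumptions for illustration:

```python
# A rough sketch: estimate the human/machine mix of a reading list by
# classifying each feed in an OPML export by its host. The host list
# and the "subscriptions.opml" filename are illustrative assumptions.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

# Hosts assumed here to serve machine-generated (search/topic/ego) feeds.
MACHINE_HOSTS = {"pubsub.com", "technorati.com", "feedster.com",
                 "findory.com", "topix.net"}

def machine_share(opml_path):
    tree = ET.parse(opml_path)
    machine = total = 0
    # OPML stores each subscription as an <outline> with an xmlUrl attribute.
    for node in tree.iter("outline"):
        url = node.get("xmlUrl")
        if not url:
            continue
        total += 1
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host in MACHINE_HOSTS:
            machine += 1
    return machine / total if total else 0.0

print(f"machine feeds: {machine_share('subscriptions.opml'):.0%}")
```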

From here on in, machine aggregation can really only increase its share of attention, while human feeds' share will decrease. But don't worry, because it's not a zero-sum game. Aggregation as a market will continue to grow at a great rate. Even though the ratio of human-to-machine feeds will even up in the coming years, the whole pie will grow significantly - if overall feed readership triples while machine feeds climb from, say, 15% to 50% of subscriptions, human feeds still gain in absolute terms.

2. Who profits from the exploding digital datastream?

Andrew says that "traditional media companies" have in the past "derived enormous profit" from controlling information. But audience fragmentation is quickly moving that control to content aggregators - he specifically cites Google and Yahoo.

The search and aggregation companies are set to profit. And provided a click-through is still required to access full content, niche publishers should profit too.

One thing to watch is the brewing controversy (or Browster-ing controversy, in this case!) over full-content aggregation, which some companies are already attempting to profit from. I also wonder about excerpted content aggregation, or remixes of content - because the boundaries will surely be pushed in those areas too.
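
To show how the click-through survives excerpting, here's a minimal sketch of an excerpt-only RSS item: the description carries a truncated teaser, while the link points back to the publisher's site. Element names follow RSS 2.0; the sample article text and truncation limit are invented for illustration:

```python
# A minimal sketch of an excerpt-only RSS item. The <description> holds
# a truncated summary, so readers must click the <link> through to the
# publisher's site for the full article. Sample data is invented.
import xml.etree.ElementTree as ET

def excerpt_item(title, link, full_text, limit=200):
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link  # the click-through target
    if len(full_text) > limit:
        # Cut at a word boundary and signal there's more on the site.
        full_text = full_text[:limit].rsplit(" ", 1)[0] + "..."
    ET.SubElement(item, "description").text = full_text
    return item

item = excerpt_item(
    "Example post",
    "http://example.com/full-article",
    "The complete article text would live on the publisher's site, and "
    "an aggregator showing only this feed gives readers just a teaser.",
    limit=80)
print(ET.tostring(item, encoding="unicode"))
```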

3. Who controls the datastream itself?

Andrew doesn't write much about number 3, so I'll take a punt at it. The control of content is in one sense moving very definitely towards the consumer, or reader (neither term seems to fit in this age of the read/write web!). This is something I've been exploring over the past months and it continues to fascinate me. RSS aggregators and topic/tag feeds are two technologies that in a very real sense give power back to the user. I choose (by subscribing) what content flows into my aggregator. I choose which of a million niche topics to track by RSS.
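
To give a flavour of that pull model, here's a bare-bones sketch of an aggregator fetching a single subscribed feed, using only Python's standard library - the topic feed URL is hypothetical:

```python
# A bare-bones sketch of the "pull" model: the reader picks a feed URL
# (here a hypothetical topic feed) and the aggregator fetches only what
# was subscribed to. Standard library only; the URL is invented.
import urllib.request
import xml.etree.ElementTree as ET

def pull_feed(url):
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    # RSS 2.0 nests <item> elements under <channel>.
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        print(f"{title}\n  {link}")

pull_feed("http://example.com/topics/rss-aggregation.xml")
```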

However, as Andrew points out in his post, Google and Yahoo - and apps like Bloglines - are now the main tools for accessing the datastream. Their influence over the datastream is increasingly important - you can see evidence of this in Google's highly profitable advertising business.
