Planet RDF

It's triples all the way down

March 15

Danny Ayers: Dave, A New Outliner (soon) and OPML (again, and another)

My my, speak of the devil, a link from Dave Winer. Not such a surprise, I just had a dig at OPML over on Don Park’s blog, commenting on OPML Revisited. Bit of a bugbear, m’fraid, I’ve ranted often enough. But it’s out there, whether I like it or not, so in the past I’ve tried to make the best of it (even if it got a little strained at times).

Dave reckons I won’t like the fact he’s going to ship a new outliner this month. Nah, I’m actually looking forward to seeing what he comes up with. It’s also good to hear he’s doing something constructive.

When I first played with the Userland kit I was very impressed, and then while working on the book I had another look at Radio. To be frank the desktop UI looked dated, but functionally it was ok, and the Web-served stuff was fine, certainly on a par with many of the other apps in this space. Some of the stuff just visible under the bonnet is pretty cool - having an object database and scripting language built-in is a major plus. Ok, personally I’d opt for a triplestore and Python, but even today we’re only just starting to see shrinkwrap apps moving to non-proprietary languages and interfaces.

Back to OPML. It’s not that I’ve got anything against it per se, it’s probably as good a data format to use behind an outliner as any. It’s only when it’s promoted as some kind of universal representation/interchange language that my hackles rise. One too many frustrating experiences.

It looks like it should be great for any job you can think of, and it’s such easy syntax. Who needs RDF? But then when you try to do anything at all interesting with OPML, you discover all you’ve got is clunkified XML. Fool’s gold.

But I am looking forward to seeing where Dave goes with this new tool. You never know, I might even start using it myself (as long as the license allows you to run XSLT on your data ;-).

PS. I only just realised he’s not calling it “Click here” but “OPML”. Aw, it would have made a good companion for Microsoft’s (click)[Start] (to shutdown).

Posted at 00:45

March 14

Danny Ayers: Link semantic redirection (or something)

Nick, whose Hand Made Furniture I linked to the other day, just mailed me to say that it had affected Google. This blog is now #3 for “High Peak Joinery”. Oops. Typical Sawyeds. But I guess as long as there’s a reference through, it still might be worthwhile.

I should have remembered, there was a similar A-list case a while back - Rageboy did a thing he called Ad Googlem:

Simply place the exact string “that asshole, Dave Winer” somewhere on your blog. The quotes and comma are optional. Then wait. Google will eventually find all such references, and as they are added to the global index, the new search button I’ve provided near the top of the right column (see it over there?) will serve as a day-to-day measure of Dave’s fast-growing popular appeal.

Unfortunately for Rageboy, the plan backfired, and if I remember correctly (could have sworn I blogged about it) soon after that post his own blog was the top hit for “that asshole”. A completely different site now has the top hit. Which only goes to show.

Posted at 22:55

Danny Ayers: State of The Blogosphere, March 2005, Part 1: Growth of Blogs

Sifry’s Alerts: State of The Blogosphere, March 2005, Part 1: Growth of Blogs
Some stats from Technorati. They’re tracking over 7.8 million weblogs, and 937 million links. The blogosphere is doubling in size about once every 5 months.

doomed, laddy!
Not sure why, that seems somehow scary…
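For what it’s worth, the scariness can be quantified: doubling every 5 months compounds fast. A back-of-the-envelope sketch (the 7.8 million figure is Technorati’s; the projection and the code are mine):

```python
# Rough growth model implied by Technorati's numbers: the weblog count
# doubles every 5 months, starting from 7.8 million.

def weblogs_after(months, start=7.8e6, doubling_period=5):
    """Exponential growth: the count doubles every `doubling_period` months."""
    return start * 2 ** (months / doubling_period)

annual_factor = 2 ** (12 / 5)
print(round(annual_factor, 2))            # 5.28 -- roughly 5x per year
print(round(weblogs_after(12) / 1e6, 1))  # 41.2 -- million weblogs a year out
```

At that rate the blogosphere would pass 40 million inside a year, which is indeed somehow scary.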

Posted at 22:12

Edd Dumbill: Personal productivity in Linux

Seeing as there are currently so many handy tips about how to be organized and get things done, I thought I'd note down a few ways I get things done with Linux and GNOME.

Posted at 22:02

Danny Ayers: Filesystem-based triplestore noodling

I just got a bit sidetracked. In the process of Sharpening the Tools I’ve been getting more Linux-oriented, but got distracted, playing with some ideas for using a *nix filesystem more or less directly for RDF purposes. Two slightly overlapping ideas came to mind: one is a command-line vocab/ontology/instance data editor based on filesystem tools, the other is a triplestore. In both cases nodes in the graph are represented by named items in the fs, i.e. files and directories. For the ontology editor it would make sense to use simple names, letting the code sort out the full syntax. But for the triplestore it seems reasonable to use (suitably encoded) URIs directly as names in the filesystem. Either way, the key part is that symlinks allow you to go beyond the tree structure.

I decided to try the triplestore. I’ve started putting together a bit of Python as proof-of-concept. The code makes calls to the OS to mkdir etc. I’ve got some of the helpers in place; once these are done the next step would probably be to hook up a parser (from rdflib/pyrple) to call the methods. Then a treewalker to reconstitute a serialization (I found a basic treewalker in the Python Tutorial). Next, run it over some test cases, refactor as needed. If the proof-of-concept worked, the next step would probably be to recode in native C. A tangential consideration is to make the material make sense if the directories were on a Web server.

Anyhow, I’ve no idea if anyone’s done this before - the default answer is usually yes, and better, but it’s still a fun exercise. But I’m getting distracted from what I should be doing, and should put this on one side, so I’m posting here so I can pick it up again.

Here is the rest of the documentation:

“"”
Triple store based on *nix-style filesystem
subjects/resources are stored as directories (in a common root dir)
predicates are stored as subdirectories of those
objects stored as either symlinks to subjects/resources named “object", or files named “object” containing the literal data
“"”

Source: fs-store.py

PS. make that object0, object1...
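For the record, here’s a minimal sketch of the layout the docstring describes - nothing from fs-store.py itself; the function names, the percent-encoding choice and the object-numbering are my own guesses at one way to flesh it out:

```python
# Sketch of a *nix filesystem triplestore: subjects/resources are
# directories, predicates are subdirectories, objects are symlinks (to
# resource directories) or plain files (holding literal data).
import os
from urllib.parse import quote, unquote

ROOT = "store"

def _enc(uri):
    # Percent-encode so an arbitrary URI is safe as a file/directory name
    return quote(uri, safe="")

def add_triple(subject, predicate, obj, literal=False):
    pred_dir = os.path.join(ROOT, _enc(subject), _enc(predicate))
    os.makedirs(pred_dir, exist_ok=True)
    # Number the objects: object0, object1, ...
    target = os.path.join(pred_dir, "object%d" % len(os.listdir(pred_dir)))
    if literal:
        with open(target, "w") as f:
            f.write(obj)
    else:
        obj_dir = os.path.join(ROOT, _enc(obj))
        os.makedirs(obj_dir, exist_ok=True)
        # The symlink is what lets the tree become a graph
        os.symlink(os.path.relpath(obj_dir, pred_dir), target)

def triples():
    """Treewalker: reconstitute (subject, predicate, object) tuples."""
    for subj in os.listdir(ROOT):
        for pred in os.listdir(os.path.join(ROOT, subj)):
            pred_dir = os.path.join(ROOT, subj, pred)
            for name in sorted(os.listdir(pred_dir)):
                path = os.path.join(pred_dir, name)
                if os.path.islink(path):
                    obj = unquote(os.path.basename(os.path.realpath(path)))
                else:
                    with open(path) as f:
                        obj = f.read()
                yield unquote(subj), unquote(pred), obj
```

add_triple() plus list(triples()) round-trips; hooking a parser up to the former and a serializer to the latter would complete the proof-of-concept.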

Posted at 17:47

rdfdata.org: Gerald Oskoboiny's picture metadata

Google counts over 4,000. Replace the .rdf extension with .jpg to see the picture or with .html to see a page showing the picture, HTML of the metadata, and a link to the RDF.

Posted at 15:00

Danny Ayers: Des. Res.

Value for money [£52,000] one bedroom second floor flat…
grotty room

From a UK estate agent’s site.

(disclosure: I’ve lived in worse…)

Posted at 14:32

Jim Ley: Ajax… Or how an old idea needs new marketing.

PPK talks about Ajax, and there’s some talk of exactly the history of the concepts, but even in the comments they’re far from going back far enough.

Ajax itself is nothing new. Everything was in place to do it in IE in 1999 once IE5 shipped (or before, if you installed MSXML) - 2001 brought it to Mozilla, when XMLHttpRequest arrived. The dynamic HTML etc. needed had been there for years. Of course in reality, you don’t need the XMLHttpRequest object; iframe/ilayer based systems are fine, and Brent Ashley had a nicely packaged version of this in 2000 with JSRS, but the general techniques were in use before that.

In 1998, Microsoft had offered us a Java based Remote Scripting solution.

In fact even before DHTML, Frames and basic forms had given us the possibility to update pages without the user seeing the refresh - see for example a usenet post from March 1996 discussing it.

So the concepts have existed since the beginning of scripted webpages, well since Netscape 3 anyway. The idea isn’t new.

So what took so long?

The obvious first question is why, if it’s been around for over 9 years, we are only just seeing prominent sites taking up the concept - I think there are a number of reasons for this:

Will it revolutionise the world, or is it just hype?

Ajax is hype, there’s nothing wrong with that though; the product managers and marketing people need a word they can understand, and “partially updating webpages using XMLHttpRequest for the server communication, giving the user more responsive UIs” simply isn’t an understandable pitch. So it needs a name. It’s a poor name, seeing as very few developers will use the XML or XHTML standard in it - XML simply because it’s too slow, and too complicated - JSON is simply simpler and faster, and XHTML simply because no-one ever uses it other than in a cargo-cult way, they certainly won’t care if it’s valid.

Even then though, I don’t think it’ll revolutionise the world. A lot of companies will blow an awful lot of cash attempting it; the current problem is still the woeful lack of competent scripters who can write such interfaces and have them degrade to an accessible, or even usable, system if people change a single setting. Many of these companies will of course be putting their server guys on it, and a very out-of-date copy of the DHTML Bible will generate atrocious, unmaintainable sites. Some might get lucky and have a competent scripter on board - there are some of us out there, but there won’t be enough to make it some great new age of web-development. Mostly we’ll see more complicated systems that don’t need the features; people are happy to move to a new page to get their flight search results, they’re only doing it briefly.

Now is certainly not a good time, we’re about to see the monoculture break down; IE7 will arrive and have a whole new set of bugs and opportunities - that’s fine for Google of course, they’re in the compatibility tests I’m sure. That’s not fine for new projects starting now, they’ve just got an open-ended risk that all their work will need debugging and re-writing just as IE7 arrives. MS have traditionally done a pretty good job with backwards compatibility, but the more intensively you use JavaScript, the higher the chance that you’ll get hit by a bug.

Ajax is certainly hype, but the ideas are not new, and will always have their place, they just aren’t quite the all encompassing space that most people seem to think they are, it’s only in niche areas where people are spending a lot of time on a page, with only parts of it updating - email, auctions etc. The majority of sites simply won’t need it, validate a form with Ajax - you’re crazy, you’re wasting your resources…

Posted at 13:25

Leo Sauermann: CeBit Tour 2005

Lars Zapf and I are hitting the CeBit. On our tour, we are presenting the state of the art Semantic Desktop and Epos results.

o2 booth
As a by-product, there are photos!

Posted at 08:14

March 13

Dave Beckett: SWOOP Web Ontology Editor v2.2

The fine people from Jim Hendler's Mindswap group at the University of Maryland College Park just announced version 2.2 of their SWOOP Web Ontology Editor, written in Java with a Swing GUI. I've been trying it from Subversion occasionally and it's come a long way, and now works well as an RDF schema and OWL ontology editor for the regular person. I've recommended it to several people as the one to get started with. For me, it's also handy for analysing exactly what features of OWL are used in RDF schemas/OWL ontologies and what puts them in OWL-DL or other OWL species. Plus it's less scary than other ontology editing solutions...

Also it has Turtle support, so I gotta love it for that!

Posted at 21:18

Danny Ayers: QOTD

Behind the firewall, nobody can hear you scream.

- Sean McGrath

[It’s about SOAP, in case you hadn’t guessed. Sean concludes with the additional clause: “…and nobody can see your wallet bleed.”]

Posted at 20:48

Danny Ayers: Blogger Atom API

Blogger: Atom API Documentation

Not had time to look at it properly yet, but if I remember correctly it does generally look in the spirit of the last draft. It’s RESTful HTTP (using GET, POST, PUT and DELETE), with the Atom format as payload for posts. Authentication is HTTP Basic over SSL, which seems reasonable. Lists of resources are returned as <link rel="..., which was definitely the idea a while ago, but at least as far as the format is concerned there was a drift away from that construct… Whatever, given that the “API” in the name was dumped ages ago (now it’s the “Atom Publishing Protocol”), I imagine it deviates from anything normative. Not that there is anything normative yet.

Posted at 20:32

Christopher Schmidt: FilmTrust

FilmTrust logo

I’m not sure if I’ve posted on this before, but I noticed a few new features in the site, so I’m going to mention it again.

FilmTrust is a film rating site, much like the ratings built into NetFlix or other similar services. You rate things you’ve seen, and FilmTrust offers you suggestions as to what you might want to look into seeing. However, instead of just basing it on what movies you’ve seen and what everyone else thought about those movies, it also uses social connections to make these estimates. You create a “friends” network, and give each of these friends ‘ratings’, which determines how much effect their opinions have on your recommendations.

According to the tour, this calculation is “… calculated using the trust ratings you have for your friends, what they have for their friends, and how those people rated the film.”
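That description suggests something like a trust-weighted average. The actual algorithm is part of Golbeck’s research and certainly more sophisticated; this is just a toy sketch of the flavour, with invented names and numbers:

```python
# Toy trust-weighted recommendation: average your friends' ratings for a
# film, weighting each rating by how much you trust that friend.

def recommended_rating(film_ratings, trust):
    """film_ratings: {friend: stars}, trust: {friend: trust level 0-10}."""
    total = weight = 0.0
    for friend, rating in film_ratings.items():
        t = trust.get(friend, 0)
        total += t * rating
        weight += t
    return total / weight if weight else None

trust = {"alice": 10, "bob": 5, "carol": 1}
ratings = {"alice": 4.0, "bob": 2.0, "carol": 0.5}
print(round(recommended_rating(ratings, trust), 2))  # 3.16 -- alice dominates
```

Extending this through friends-of-friends, as the tour describes, would mean propagating the trust weights along paths in the social graph.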

One of the cooler aspects of the project is that it is rich with information in RDF. So, you can take the information from the site, pull it into a local RDF store, and manipulate it to your heart’s content. If you wanted to do your own suggested ratings system by looking at the reviews that each of your friends have offered, you can do that: you could, indeed, redisplay much of the information available on the site solely by using the RDF information and doing your own calculations. (This would, I’m pretty sure, bring up the issue of copyright, so I wouldn’t recommend it without at least discussing it with the project maintainers first.)

FilmTrust is an academic research project being run by Jennifer Golbeck. More information is available on the About FilmTrust page.

Other people have already written on the topic of FilmTrust: MortenF has some nifty toys based around it, Danny’s post a month ago talked about it, and there’s always the random non-english post when you get any project large enough to get a significant following.

I’d like to see more people joining it, especially people with an interest in good computer related movies, because I need some suggestions. So, join today, and add me as a friend!

Posted at 14:57

March 12

Vincent Tabard: Escape, yes, but don't forget the Archives!

Tsunami Hazard Zone - In case of earthquake, go to high ground or inland

Let's talk about Anouck. She is creating her FOAF document, but she receives tons of spam every day, and wouldn't like her new FOAF file to be one more way for spammers to get her mail address. Therefore she uses foaf:mbox_sha1sum in order to protect her privacy, but also to be identifiable within the Semantic Web. Her mail address is mailto:anouck@example.org, and its SHA-1 hash is 11a61224bc19649d4f3f2dec2406c88eff10c19e. Her FOAF document basically looks like this:

<foaf:Person>
 <foaf:name>Anouck</foaf:name>
 <foaf:mbox_sha1sum>11a61224bc19649d4f3f2dec2406c88eff10c19e</foaf:mbox_sha1sum>
</foaf:Person>

But she hears about SHA-1 being broken. The FOAF community decides to create foaf:mbox_sha256sum, and Anouck immediately replaces her old SHA-1 checksum with the brand new SHA-256 checksum:

<foaf:Person>
 <foaf:name>Anouck</foaf:name>
 <foaf:mbox_sha256sum>a4b57c86f7efd0f7322ffe906c4e323b621f12df87006699b0728081ca092436</foaf:mbox_sha256sum>
</foaf:Person>

And she thinks everything is perfect. Her mail address is perfectly protected from the spammers. Actually, it is, SHA-256 being - at the moment - resistant to the attacks. But what about archives? What about old copies of her FOAF file, containing the same mail address, but hashed using SHA-1? There must be tons of ways to grab it! Google cache, Semantic Web crawlers, etc.

If the "go to high ground" technique is a very good way to save your life in case of a tsunami, it isn't is the case of protecting your mail address from spammers, just because of the archives... I wonder if there is a way to overcome this problem.

[Side note] The tsunami sign comes from a PDF by the Department of Geology of the State of Oregon, USA.

[Edit] You can try to generate SHA-256 checksums with this page I created, based on a class I found on the Net. Remember that foaf:mbox_sha256sum does not exist yet!
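The checksums are also easy to produce with a few lines of Python's stdlib - a sketch; remember the SHA-256 property is still hypothetical:

```python
# Hashing a mailto: URI for foaf:mbox_sha1sum (and for the would-be
# foaf:mbox_sha256sum, which does not exist in the FOAF vocabulary).
import hashlib

mbox = "mailto:anouck@example.org"

sha1 = hashlib.sha1(mbox.encode("ascii")).hexdigest()      # 40 hex chars
sha256 = hashlib.sha256(mbox.encode("ascii")).hexdigest()  # 64 hex chars

print(sha1)
print(sha256)
```

Note that the hash covers the whole mailto: URI, not just the bare address - hashing anouck@example.org alone gives a different value.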

Posted at 19:39

Norm Walsh: WITW: NSDL

Norm's Service Description Language (staggeringly original name, I know) is my experiment with a simpler web services description language.

Posted at 14:02

Danny Ayers: The Long Tail

A must-read post from Joe (who?) on markets and the long tail of the distribution. The following shows the long tail of Excite search engine’s queries:

long tail

Joe leads into the potential market for software that can support the long tail of business applications. The post finishes as an advert for JotSpot, “The Application Wiki”.

Now the “80/20” law (aka the Pareto principle) surfaces every time anyone talks about software development (for example in the design of Atom).

But that long tail suggests that in many circumstances, where you want to maximise benefits, the 80/20 mark is being viewed in exactly the wrong way. What you need is something flexible enough to support the tail, rather than optimised to the peak (the two aren’t necessarily exclusive, but things seem to tend that way).

Bnoopy: The long tail of software. Millions of Markets of Dozens.

(spotter: Andrew)

Posted at 09:57

Danny Ayers: Penguin Law

I’m really starting to appreciate working in Linux on my “new” machine, but there was a disturbance this morning. Somehow lilo and/or my MBR got corrupted, so all I got on boot was 99 99 99 99.... Now fixed (I think) but while looking for docs I stumbled on this: The Law Enforcement Introduction to Linux (PDF) by Special Agent Barry J. Grundy, includes stuff like:

VII. Linux and Forensics
Linux comes with a number of simple utilities that make imaging and basic analysis of suspect disks and drives comparatively easy.

Shouldn’t that be interrogation of suspect disks?

I lost the link, and on re-searching found a newer version, now entitled The Law Enforcement and Forensic Examiner Introduction to Linux (PDF). The forensics section seems to have been greatly expanded, with delights like “Creating a forensic image of the suspect disk". I wonder if you have to carry a torch in that upside-down way favoured by the team in TV’s “Crime Scene Investigates"?

A man was driving down the highway with a car full of penguins. Penguins sticking out the windows, penguins coming out the sunroof, penguins everywhere. A cop pulled him over and told him if he didn’t want a ticket he’d better take those penguins straight to the zoo. The man promised he would and drove off.

The next day, the same highway, the same car, the same guy, the same cop and the same penguins - only this time the penguins were all wearing sunglasses! The cop pulled the guy over and said, “I thought I told you to take these penguins to the zoo!”
“I did” said the guy, “Today I’m taking them to the beach!”

Posted at 09:27

Andrew Newman: Making Money from the Long Tail

The long tail of software. Millions of Markets of Dozens. "You know the real reason Excite went out of business? We couldn’t figure out how to make money from 97% of our traffic. We couldn’t figure out how to make money from the long tail – from those que

Posted at 01:50

March 11

Andrew Newman: Promising Project

A scalable environment for the Semantic Web "Taking a bottom-up approach to the development of the Semantic Web, a scalable and modular Knowledge Management system has been successfully validated on two university websites. Created by the IST program fun

Posted at 19:51

Danny Ayers: - Joinery, Cabinet Making and Hand Made Furniture in the High Peak

My cousin Nick’s looking at what turns up in referrer logs. So be sure and check out these nice wooden things: NB Joinery - Joinery, Cabinet Making and Hand Made Furniture in the High Peak

Must add a “nepotism” section to the sidebar…

Posted at 19:26

Danny Ayers: Work is Too Much Work

Sean McGrath :

I was a Dbase/Clipper/Smart programmer for many years.

The drill was as follows: set up your database tables. From there - without doing another *ounce* of work - you could browse the database, add records, generate simple reports.

I know of no application that allows me to do that so easily today with a web front end. Maybe I’m missing something massive.

This follows from an earlier post (talking about XForms in OpenOffice 2.0) in which he says:

It was vastly easier to create a CRUD application (a database app with Create, Report [Read?], Update and Delete functions) in the days of Dbase II than it is today.

(See also this disagreement from Stefan Tilkov).

I think Sean’s got a very good point here. It seems to me that a very large proportion of today’s user-oriented Web applications could run as CRUD. We’ve already got a fair idea how to separate content from presentation. But the business logic is usually all tangled up in a combination of SQL tables (and maybe triggers etc) and objects/procedures or other relatively hard-coded representation.

A part of the reason that middle layer has tended to grow over time is the mismatch between the logic provided by the database and the real-world data it has to manage (before anyone does a Date/Pascal on me, much of this is to do with the design of SQL databases, not Codd’s model). Another problem is that usually system design relies on up-front design of database schema (which nowadays might include XML schemas). There isn’t much flexibility for continuous development at the back end, so things get pushed forward. Not very eXtreme.

Now I wouldn’t suggest for a moment that it’s the answer just yet (it could get massive), but an avenue down which a solution may lie is (yep) replacing the back end relational database with a triplestore, SQL with SPARQL and schemas with ontologies. Bit of RESTful HTTP and XML for the transport, bit of XSLT for the presentation layer (with HTML forms, XForms, XMLHttpRequest, whatever you like). There’s no reason why the whole lot couldn’t be generated from a set of statements, although a bit of PHP template hacking may be needed in the interim…

Yes, this is the old Semantic Web chestnut rewritten for a new generation…or rather, the generation that got quicker results with Dbase years back…

The advantage of a Semantic Web-oriented store is that the business logic isn’t baked in by up-front design. Initial structures may not be entirely suited to the job in hand as requirements change, but rather than trying to recode to bridge between what’s in the store and the new demands, refactor the ontologies in the back end to meet the new demands. I believe this would be considerably easier and more efficient than refactoring table structures, as the wiring is more plastic - retract this (RDFS/OWL-level) statement, add that one, and with no change in the instance data in the store you’ve got a different shape of database. No intermediary objects are needed, the business logic is all expressed declaratively. There’s a hint of this in places like Longwell, but I think we’ll only see it in full colour when SPARQL explodes. (But now is a very good time to play with this stuff, IMHO.)
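A toy illustration of the “different shape of database” point - the class, names and pattern-matching API here are all invented for the sketch, and a real store would use SPARQL rather than this poor man’s triple pattern:

```python
# In a triplestore, schema-level statements are just more triples, so
# retracting one and adding another reshapes the database without
# touching the instance data.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def retract(self, s, p, o):
        self.triples.discard((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Basic triple pattern: None acts as a wildcard."""
        return [(a, b, c) for (a, b, c) in self.triples
                if s in (None, a) and p in (None, b) and o in (None, c)]

store = TripleStore()
# Instance data stays put...
store.add("ex:order1", "rdf:type", "ex:Order")
# ...while the "business logic" is itself statements we can swap:
store.add("ex:Order", "rdfs:subClassOf", "ex:Document")
store.retract("ex:Order", "rdfs:subClassOf", "ex:Document")
store.add("ex:Order", "rdfs:subClassOf", "ex:Transaction")
print(store.match(p="rdfs:subClassOf"))
# [('ex:Order', 'rdfs:subClassOf', 'ex:Transaction')]
```

One retract and one add at the schema level, and the database has a different shape; the instance triple never moved.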

[Don’t forget that all of this is talking about a relatively discrete, standalone system. But using these technologies the network effect comes free.]

When I get chance I’ll do a proper write-up of how I’d go about Agile Modelling with Semantic Web Technologies - won’t be difficult, I’ll just follow where that title leads.
Look at this again:

The drill was as follows: set up your database tables. From there - without doing another *ounce* of work - you could browse the database, add records, generate simple reports.

Now imagine you don’t have to set up your database tables.

Posted at 17:39

Danny Ayers: Chumpologica

I’m playing with the Python/Redland aggregator chumpologica, it basically worked with only a minimal amount of tweaking but I seem to be without content - perhaps because Tidy wasn’t installed. So this is a test post so there’s something new to look at.

Yep, that worked fine. Next step down the line is to get the data going into a MySQL-backed model (hmm, bet someone’s already done the wrapping for that - good excuse to visit #swig later).

Posted at 10:11

Danny Ayers: Semantic Web Reference Card

The UMBC Semantic Web Reference Card is a handy “cheat sheet” for semantic web developers and programmers. It can be printed double sided on one sheet of paper and tri-folded. The card lists common RDF/RDFS/OWL classes and properties, popular namespaces and terms, XML datatypes, reserved terms, grammars and examples for encodings, etc.

UMBC eBiquity: Resource: UMBC Semantic Web Reference Card
Put together by Li Ding, Pranam Kolari, and Tim Finin.

Should be very handy, thanks guys.

daleks

Posted at 08:32

Danny Ayers: Warm!

After fairly heavy snow about a week ago, today there was a clear blue sky and it was warm. Came as a real surprise after what seems (was) months of shivering. By a nice coincidence a couple of books arrived from Amazon this morning, so I made myself comfortable.

danny reclining

The books are The Linux Cookbook and Wicked Cool Shell Scripts (the scripts). The first is an intro book, but reads very well - I decided it was time to fill in some of the huge gaps in my knowledge of basic *nix. The second - wicked, innit. Both from No Starch Press, first impressions are very good.

Basil spies a penguin -

Basil reads about Linux

Caroline took those pics, she’d been for a walk. Couple more I raided from the camera -

Primo showing off his camouflage:

Primo campanile

Sparql on a comfy oil painting:

Sparql

She had the trip to the vets to be neutered two days ago. I really hate it, but it had to be done - the books all seem to say it’s for their own good in the long run. She came back very doped up, then yesterday she was pitiful, little crying noises, “it hurts!!”. But already by this morning she was back on form, playing Pounce…Run Away (great game, I’ll post the rules sometime). While I was out in the yard she came and sat in one of the plant tubs to keep me company.

Posted at 00:53

March 10

Danny Ayers: REST, SemWeb and Messages

Queries from Julien Boyreau that I reckon are worth reposting from comments:

URI is first-class pillar in SemWeb AND Rest : do you know if there are large bridges between REST and SemWeb ?
Typically, I think RDF Arc-based relations could be WONDERFUL to describe the correlations between “Message” exchanged during an agents’ cooperation.
Do you know some activities in SemWeb about this ? Is there some ontology with a owl:Class foo:Message with foo:Property foo:sentby with range foaf:Agent ?????

I feel like I should be able to answer these with ease. But can’t.
Anyone?

Posted at 23:45

Danny Ayers: Notif termittent

Boo. I just realised I didn’t get notification email for the last few comments posted here. Not sure why, maybe a bit of over-zealous spam prevention. Wonder how long that’s been happening. Boo.

Posted at 23:37

Danny Ayers: StructuredBlogging

StructuredBlogging is about making a movie review look different from a calendar entry. On the surface, it’s as simple as that - formatting blog entries around their content.

The lesson we learned from blogging was that structured content and XML would be created, provided that the interface was simple enough and there was some value to the user creating the content.

We can provide mechanisms for creating content, reading content, and embedding content in both XML and HTML. We can do this transparently, so ordinary blog readers and writers won’t even notice the difference. And we can do this within existing content systems, without breaking RSS, RDF or Atom (we can even create a system for mapping structure to RDF, and back).

Experimental stuff, including a WP plugin from Bob Wyman and the pubsub.com folks.

Like I said earlier, interesting times…

Posted at 23:31

Danny Ayers: Broadband Mechanics 2005

Marc Canter & co. are relaunching with a new site, they’ve been working on the ideas for a while:

Digital Lifestyle Aggregators (DLA) are “Web 2.0”

On-line communities of end-users demand control — while open standards join small players together to provide new kinds of infrastructure. These new experiences will lead to new revenue streams for DLA operators and vendors.

BBM 2005.

Nearby (also spotted on Marc’s blog) there’s some interesting stuff from Mark Pincus of Tribe.net:

dont ask dont tell stage…today we’re at a point where officially the big players say no crawling, but unofficially they let it fly. indeed.com is quietly crawling everyone job service from career builder to craigslist. i hear that lycos has a dating service crawling all. makes sense. if i’m match.com and have the biggest db of single women, i should be in the pimping business. you can find my women everywhere but pay me if you want to meet them.

Interesting times…

Posted at 22:36

Leigh Dodds: My First Computer

A scan of the promotional flier for the Sinclair ZX Spectrum that I carried round for months prior to my parents buying me a 48K Spectrum for Christmas. Click through to the larger image to read the marketing text. Here's some extracts: "Professional power -- personal computer price!" "Your ZX Spectrum comes with a mains adaptor and all the necessary leads to connect to most cassette records and TVs (colour or black and white)" "...later this year there will be Microdrives for massive amounts of extra on-line storage, plus an RS232/network interface board" "Sound -- BEEP command with variable...

Posted at 21:46

Danny Ayers: MS Aggregator

It seems Microsoft have a MyNetscape-style aggregator on the way, integrated with MSN search (and MSN Spaces too, I’d imagine). Their test deployment at http://start.com/1/ won’t let me in: HTTP Error 403.6 - Forbidden: IP address of the client has been rejected. I guess it wasn’t meant to come out of the sandbox just yet.

(spotter, with more info: Richard)

Posted at 20:04

Copyright © The PANTS Collective. A Useful Production. Contact us.