I want my data

Tim O’Reilly, Tony Coates and Havoc Pennington all tackled a very important issue recently: data Freedom (which, apparently, is more important than software Freedom). They are all trying to come up with a legally valid definition, so that a new license for Open Data Services can exist in the near future.

Not a day passes by without me wondering what I will do if Blogsome goes bust. What will happen to my data? All these hours spent filling up these entries! Currently (just like at my old Slashdot blog) there is no way to take my data out of their MySQL database and move it to another server if needed.

Blogsome is a Closed Services web site (like most others), even though it operates on Open Source Software. I hope Blogsome becomes one of the first sites to offer Open Data Services and allow its users to download SQL+data tarballs from their Admin pages. Think of it as a “backup” if you like…


This is the admin speaking...
Eugenia wrote on August 3rd, 2006 at 5:31 AM PST:

Adam, we could do that, but first I need to get my data off. ;)

Rick wrote on August 3rd, 2006 at 10:56 AM PST:

“They all try to come up with a definition that can be legally valid, so a new license, regarding Open Data Services, can exist in the near future.”

The problem is coming up with a license that is legally valid. It might actually be impossible to come up with something that is legally enforceable with the way current copyright law works.

It would be better if service providers used open services as a marketing strategy, instead of some developers trying to put a legal straitjacket on their code. Besides, there’s not going to be code so good that service providers will just decide to give up the right to do what they want with their data. And at the end of the day, it is their data, because you’re using their service.

Adam wrote on August 3rd, 2006 at 12:58 PM PST:

There’s still time to migrate you over to the new version of Small Axe :)

Al Phi wrote on August 28th, 2006 at 10:57 AM PST:

You might want to start backing up now, then… it shouldn’t be all that difficult to write a script which caches the site, akin to Google’s spiders, and parses out the relevant data. Or you could just hope Google caches it all.
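The parsing half of such a script could be quite small. Here is a minimal Python sketch that pulls entry text out of a saved page; the `div class="post"` selector and the `extract_posts` helper are assumptions for illustration — the actual Blogsome markup would need checking.

```python
# Minimal sketch: extract the text inside <div class="post"> elements
# from HTML pages saved to disk (e.g. by wget). The "post" class name
# is an assumption about the blog's markup.
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collects text found inside <div class="post"> elements."""
    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting depth inside a post div (0 = outside)
        self.chunks = []  # text fragments collected so far

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            if self.depth:                    # a div nested inside a post
                self.depth += 1
            elif ("class", "post") in attrs:  # entering a post div
                self.depth = 1

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:                        # only keep text inside posts
            self.chunks.append(data)

def extract_posts(html):
    parser = PostExtractor()
    parser.feed(html)
    return "".join(parser.chunks).strip()

# Usage: feed it a page saved by wget (or fetched with urllib.request).
page = '<html><body><div class="post"><p>I want my data</p></div></body></html>'
print(extract_posts(page))  # -> I want my data
```

From there, writing the extracted entries out as text files (or even SQL INSERT statements) is straightforward.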

Al Phi wrote on August 28th, 2006 at 11:13 AM PST:

There’s a small program called wget for Linux. It is designed to fetch files over HTTP, FTP, etc. It has the slight disadvantage of having to comply with robots.txt, but I see no problems in this site’s robots.txt:

User-Agent: *
Disallow: /captcha-img.php

It allows all crawlers and just disallows fetching the captcha script.

Anyone can run this; it will back up your entire site:
wget -r --tries=inf http://eugenia.blogsome.com/ -o log

If you need any help with this, just email me.

Comments are closed as this blog post is now archived.

