Story by AARNet’s eResearch Director, Guido Aben
Guido Aben will be presenting at the Terena conference (TNC2013) in Maastricht on 4 June 2013
Transfer a Terabyte-size file in around 5 hours: learn about the Terabyte Challenge
The history of CloudStor, or FileSender as the underlying software is called, starts somewhere in 2007. For a little context, in 2007 YouTube was 2 years old; Facebook had just turned three.
It wasn’t immediately apparent back then, but keen (some would say starry-eyed) observers could see signs of the internet beginning to morph from a purely consumption platform into a creation platform.
If you had real faith, and you squinted, you might even see the signs of browser builders adapting to this model, too. Still, Internet Explorer retained 80% market share in 2007.
Google Chrome didn’t exist yet (launched 2008). HTML5, the standard that today enables production and creativity using browsers, only got properly ratified in 2011, and even then only in part – it is still a living, moving standard today in 2013.
In short, the idea of assuming that a browser, just a browser, would soon be a powerful creation instrument certainly was a little outlandish. Anybody who wanted to do something slightly funky using browsers used add-ins and applets; typically either Java or Adobe Flash.
In fact there was a bit of a class distinction going on – slick, commercial applications preferred Flash; serious-minded science types would naturally opt for Java applets. And so it certainly was in 2007: lots of research tools were written in Java, with interfaces looking like Java. It worked, but not as smoothly as YouTube.
In 2007, AARNet received a pilot grant from the Department of Education, Employment and Workplace Relations (DEEWR) to demonstrate the merits of connecting TAFE institutes to AARNet. Part of the money went to building real broadband-enabled applications, one of which was a “large attachment inbox” for practitioners. It was well received but definitely needed polishing.
Serendipitously, later that year during a meeting in Europe about R&E networks and storage, it turned out that a handful of other research networks had also tried their hand at such a “large attachment facility”; most of them because they could no longer bear the thought of users sending hard-disks and thumbdrives when a perfectly good network was available.
All of these projects were at a similar stage: they worked, but looked like escapees from a test lab. There and then, three of these networks – AARNet, as well as the Norwegian and Irish R&E networks, UNINETT and HEAnet – decided to meet this challenge head-on and do it properly. No cop-outs – it had to be delivered using nothing but a browser (no Java allowed!), and there couldn’t be a limit on the size of attachments, even though at the time no browser could deal with files over 2GB. Oh, and all of this on a shoestring budget, too.
It’s probably unnecessary to point out we’d lined up for a punishment banquet. Let’s just fast forward through a number of years battling absurd bugs, horribly fickle standards and undocumented features from major vendors, and look in again around late 2009. That’s when the first beta service based on the new software went live.
AARNet took that plunge, ahead of the others, but our gleeful field reports made them follow suit, and by early 2010 we had all three project pioneers (AARNet, HEAnet and UNINETT) on a FileSender installation.
The rest of the R&E networking community soon caught on. By 2010, four others had joined; in 2011 six more joined, and in 2012 another 13 countries followed, including Internet2, our American counterpart, bringing us to a total of twenty-six sites.
Today, ten of those 26 sites contribute funding; nine of them have contributed code or man-hours; almost all have turned in translations in their local language. What started out as a shoestring project by a couple of NRENs (National Research and Education Networks) who had had enough is now a de facto standard for file transfers in academic networks.
Domestic numbers are equally encouraging; where we started with perhaps a dozen new punters every month early in 2009, CloudStor now greets about 100 new users each week, and transports just shy of a Terabyte each week.
That’s what it’s all about: making life easier for our connected researchers. And we’re not done yet; 2013 is shaping up to be a good year for CloudStor, with features like multi-file upload and encryption in the pipeline.
Another much-requested feature was faster uploads; the US research network Internet2 even challenged us to transfer a Terabyte in less than a business day. We took them up on it, issued a Terabyte Challenge to a number of summer students, and sure enough, one team managed to put a number of tweaks and improvements into the base FileSender code, making it possible to almost saturate a 1 Gbit/s link and consequently transfer a TB file in just under 5 hours.
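For a sense of scale, the challenge numbers can be checked with a back-of-envelope calculation (this sketch is ours, not part of the FileSender code; it assumes 1 TB = 10¹² bytes and ignores protocol overhead and retransmissions):

```python
def transfer_hours(size_bytes: float, link_bits_per_sec: float) -> float:
    """Hours needed to move size_bytes over a link of the given bit rate."""
    return size_bytes * 8 / link_bits_per_sec / 3600

ONE_TB = 1e12  # bytes, decimal Terabyte

# Theoretical floor on a fully saturated 1 Gbit/s link:
print(f"{transfer_hours(ONE_TB, 1e9):.1f} h")  # -> 2.2 h

# Sustained throughput implied by a 5-hour Terabyte transfer:
print(f"{ONE_TB * 8 / (5 * 3600) / 1e6:.0f} Mbit/s")  # -> 444 Mbit/s
```

In other words, the hard floor on a 1 Gbit/s link is a little over two hours, and a sub-5-hour transfer means sustaining well over 400 Mbit/s end to end for the whole run – a serious feat for an upload driven from a browser.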
This bodes well for a number of scientific disciplines that have seen an explosion in file sizes over the past few years – I’m thinking genomics in particular, where single genomic sequencer runs are already often over 100GB and show no signs of slowing.