InfiniBand experiments show extraordinary improvements in data transfer rates for researchers


NCI Raijin

AARNet has been supporting the National Computational Infrastructure (NCI)’s collaboration with A*STAR in Singapore on the development of high performance clouds for bioinformatics. This collaboration has been investigating long distance InfiniBand as a way of potentially linking A*STAR’s data centres in Singapore.

At the International Supercomputing Conference in July, NCI recognised the potential of Obsidian’s innovative Longbow technology for giving researchers from across Australia direct high-speed access to data. A*STAR and NCI were keen to push the technology and see how it would perform over longer distances.

David Wilde, AARNet’s Network Architect, said:

“This is an exciting area of research. InfiniBand is a widely used technology over short distances, but with the increasing challenges posed by gathering data in one location and processing it in another, this type of experiment could lead to new ways of bringing compute and storage together across a high-speed network.”

In collaboration with AARNet, NCI and A*STAR set up a 1Gbps link between Canberra and Singapore as a proof of concept within a couple of weeks. The promising initial experimentation led to a more ambitious trial linking NCI, Singapore and the US at higher speeds.

As there was no direct 10GbE link to Singapore, AARNet and SingAREN worked with NCI to develop a solution.

AARNet provisioned a Layer 2 virtual private network circuit from NCI in Canberra to the Pacific Wave exchange in Seattle, USA, for the experiment and demonstrations.

SingAREN and Tata completed their side of this circuit, enabling a full 10GbE path between NCI and A*STAR via Seattle as a point-to-point connection between a pair of Longbow E100s.

Thanks to all those involved, A*STAR and NCI now have an extended InfiniBand fabric running at 10G over 30,000km, and were able to demonstrate it on the show floor at SC’14 in New Orleans this week. SC’14 is the premier HPC conference in the USA, with over 5,000 attendees from HPC centres across the US and around the world.

The results have shown extraordinary improvements in data transfer rates. NCI says a typical rsync transfer of a 381GB dataset from Canberra to Singapore would normally take over 4 hours. Using standard data protocols over the 10G link, NCI was only able to bring the transfer time down to 3 hours, whereas using InfiniBand the same dataset transfer took only 8 minutes.
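To put those figures in perspective, the quoted times can be converted into effective throughput. A minimal back-of-the-envelope sketch (the dataset size and durations are taken from the article; everything else is illustration):

```python
# Effective throughput implied by the quoted transfer times
# for the 381GB Canberra-to-Singapore dataset.

DATASET_GB = 381  # gigabytes, as quoted by NCI

def throughput_gbps(seconds: float) -> float:
    """Average throughput in gigabits per second for the full dataset."""
    return DATASET_GB * 8 / seconds

baseline   = throughput_gbps(4 * 3600)  # rsync over the normal path: ~4 hours
standard   = throughput_gbps(3 * 3600)  # standard protocols over the 10G link: ~3 hours
infiniband = throughput_gbps(8 * 60)    # extended InfiniBand: ~8 minutes

print(f"baseline rsync:      {baseline:.2f} Gbps")
print(f"standard over 10G:   {standard:.2f} Gbps")
print(f"extended InfiniBand: {infiniband:.2f} Gbps")
```

On these numbers, standard protocols achieved well under 1Gbps of the available 10G capacity, while the InfiniBand transfer ran at roughly 6.35Gbps, around a 30-fold speed-up over the baseline.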

There is still a lot of work to do to make this type of transfer routine, but it shows significant promise for Australian research, with the ability to move large datasets from one side of the country to the other and integrate them into an HPC workflow.

Allan Williams, NCI’s Associate Director (Services and Technology), said:

“Now that we have been able to achieve this at a global scale, we are looking to apply the results closer to home and will look to work with Australian researchers with large data requirements who need to use the NCI facilities or to collaborate with global partners. Longer term it may even be possible for researchers to have petabytes of data mounted directly on their desktop, to further streamline access and analysis.”
