A new supercomputer at UT will transform numbers into pictures, an intuitive way of sharing information.
Beginning in January 2014, students and faculty will have access to a new supercomputer called “Maverick,” which specializes in visualization and data analysis.
The Texas Advanced Computing Center, a national supercomputing facility, recently announced that Maverick will replace its current visualization system, “Longhorn.”
“This will be a whole new system with … [a] faster processor, significantly more memory for each processor and top of the line graphic processing cards,” said Niall Gaffney, the center’s director of data intensive computing.
Gaffney said visualization is essential to the research process, revealing patterns and trends in data that scientists might otherwise miss.
“Visualization can show you things you weren’t explaining, which is important when you’re doing research,” Gaffney said. “I call that the ‘aha’ process. You look at something and say, ‘Oh that’s funny.’ Often the only way you find things is looking at things differently than the way you normally look at them.”
Computer science senior Bo Chean said transforming data into pictures, Maverick’s specialty, makes analyzing information easier.
“When you have words and numbers, there’s an extra step your brain has to go through,” Chean said. “Pictures are more intuitive.”
In the past, supercomputers at UT have been funded by the National Science Foundation, which required that they be available to scientists across the country. Because funding for Maverick came from the O’Donnell Foundation, a private donor and longtime supporter of UT and the center, Gaffney said the center will be able to reserve more of Maverick’s use for UT students and faculty.
“This is a system not being funded by National Science Foundation, so we’re running this for the folks we will be working with,” Gaffney said. “About 50 percent [of its use] will be reserved for people here in the UT system.”
Social media has recently become a source of large amounts of data, and scientists and statisticians are beginning to explore applications for mining it, Gaffney said.
“You could use this information real time from social media sites to do things very powerfully you wouldn’t be able to do otherwise,” Gaffney said. “We want to push forward on that from the data mining side and explain to people what’s going on.”
The center’s Deputy Director Dan Stanzione said in the digital age, people can easily generate large amounts of information, but the difficulty is finding significance in large data sets.
“Our ability to generate data is huge,” Stanzione said. “It’s easy to generate trillions of bytes of information. That’s way too much information to read. It’s one thing to say you have a hundred terabytes of information about cancer and it’s another thing to say you know what that means.”