The Indian Student Supercomputing Challenge

Introduction

The Student Cluster Competition (SCC) was developed in 2007 to immerse undergraduate and high school students in high performance computing (HPC). Student teams design and build clusters with hardware and software industry partners, learn designated scientific applications, apply optimization techniques for their chosen architectures, and compete in a non-stop 48-hour challenge to complete a real-world scientific workload, while impressing conference attendees and interview judges with their HPC knowledge.


The United States-based Supercomputing Conference (SC) held the first Student Cluster Competition (SCC) in November 2007. The contest has been included at every subsequent SC conference, usually featuring eight university teams from the US, Europe, and Asia. As the first organization to hold a cluster competition, SC pretty much established the template on which the other competitions are based.


The other large HPC conference, the imaginatively named ISC (International Supercomputing Conference), held its first SCC at the June 2012 event in Hamburg. This contest, jointly sponsored by the HPC Advisory Council, attracted teams from the US, home country Germany, and China. It was a big hit with conference organizers and attendees.


The third entry is the Asia Student Supercomputer Challenge (ASC), which attracts more than 150 applications. Teams from China, India, South Korea, Russia, and other countries submit applications seeking to compete.


India brings up the fourth entry: the Indian Student Supercomputing Challenge (ISSC) will host 8 teams in the final this December at Techfest, IIT Bombay. ISSC'16 is the inaugural edition, and it has drawn not only participants from many countries but also support from industry partners.


Hardware and Software

When it comes to hardware, the sky’s the limit. Over the past few years, we’ve seen traditional CPU-only systems supplanted by hybrid CPU+GPU-based clusters. We’ve also seen some ambitious teams experiment with cooling, using liquid immersion cooling for their nodes. At SC12, one team planned to combine liquid immersion with overclocking in an attempt to clean the clocks of their competitors. While their effort was foiled by logistics (their system was trapped in another country), we’re sure to see more creative efforts along these lines.

There’s no limit on how much gear, or what type of hardware, teams can bring to the competition. But there’s a catch: whatever they run can’t consume more than 3,000 watts at whatever volts and amps are customary in that location. In the US, the limit is 26 amps (26 × 115 volts ≈ 3,000 watts); at ISC, the limit is 13 amps (13 × 230 volts ≈ 3,000 watts).

This is the power limit for their compute nodes, file servers, switches, storage and everything else with the exception of PCs monitoring the system power usage. There aren’t any loopholes to exploit, either – the entire system must remain powered on and operational during the entire three-day competition. This means that students can’t use hibernation or suspension modes to power down parts of the cluster to reduce electric load. They can modify BIOS settings before the competition begins but typically aren’t allowed to make any mods after kickoff. In fact, reboots are allowed only if the system fails or hangs up.
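To make the power constraint concrete, here is a minimal back-of-the-envelope budgeting sketch of the kind a team might run before choosing hardware. All of the per-component wattage figures and quantities below are illustrative assumptions, not measurements from any actual competition system.

# Rough power-budget check for a hypothetical cluster design (Python).
# Every wattage and quantity below is an assumed, illustrative figure.

POWER_LIMIT_W = 3000   # competition budget in watts
HEADROOM_W = 200       # margin held back for load spikes (assumed)

components = {
    "CPU+GPU compute node": (280, 8),    # (watts under load, quantity) - assumed
    "head / file-server node": (250, 1),
    "cluster switch": (120, 1),
    "shared storage shelf": (150, 1),
}

total_w = sum(watts * qty for watts, qty in components.values())
budget_w = POWER_LIMIT_W - HEADROOM_W

print(f"Estimated draw: {total_w} W (budget with headroom: {budget_w} W)")
if total_w > budget_w:
    print("Over budget: drop nodes, cap clock speeds, or pick lower-power parts.")
else:
    print(f"Fits with {budget_w - total_w} W to spare.")

In practice, teams measure real draw under a benchmark load rather than trusting spec sheets, since a full LINPACK run is usually the worst case the power meter will ever see.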




TEAM

To be Updated

FAQ

To be Updated

COMMITTEE

Organizing Co-Chairs


Organizing Committee


Technical Co-Chairs


Technical Committee


Expert Committee (Academia)


Expert Committee (Industry)


Advisory Committee


Committee (General)






Contact for Support: issc@techfest.org


General Queries

Hrishikesh Ekade

Manager, Events

hrishikesh@techfest.org

+91 9167820852

Registration Queries

Write to registrations_issc@techfest.org

Technical Queries

Write to issc@techfest.org


Impact

Since its inception in 2007, the Student Cluster Competition hosted at the Supercomputing conference has had broad-ranging impact on students, educational institutions, industry, and the HPC community. Some of the impacts of the SCC include:
Promoting and developing HPC curriculum at the undergraduate level
Providing excellent hands-on training for the next generation in HPC
Showcasing HPC technology and its rapid development
Demonstrating how HPC can be used by everyone
Highlighting the interconnectedness between HPC hardware, software and applications to solve real-world problems
Exposing students to HPC and inspiring them to seek their role in creating the next generation of HPC breakthroughs
Creating an opportunity for industry, national laboratories and academic institutions to work collaboratively to build the HPC community

What others are saying

"This program is a great way to develop interest and strong curricula in HPC and is of benefit to our entire community."
— Jay Owen, Director, External Research Office, AMD
"The annual SC Cluster Challenge represents an outstanding collaboration between industry and academia, supporting the next generation of users in building complex systems against stringent boundaries"
— Donnie Bell, Senior Group Manager, HPC Solutions, Dell Inc.
"we are encouraged to see the HPC community foster such a vibrant program for our upcoming HPC stars"
— John Monson, Vice President of Marketing, Mellanox Technologies
"I learned a lot about Linux and build tools that I was able to use right away in my job. I also benefited from the experience in time management and scheduling.... Every interview I had was interested in my cluster experience.”
— Sean Usher, Team University of Colorado, Boulder 2009, currently at Microsoft


Compelling Competition - Dan Olds


Speaking for myself (and probably untold millions of maniacal fans worldwide), these competitions are highly compelling affairs. The one thing I hear time and time again from students is, “I learned sooo much from this…” They’re not just referring to what they’ve learned about systems and clusters, but what they’ve learned about science and research. And they’re so eager and enthusiastic when talking about this new knowledge and what they can do with it – it’s almost contagious.

For some participants, the SCC is a life-changing event. It’s prompted some students to embrace or change their career plans – sometimes radically. These events have led to internships and even full-time, career-track jobs. For many of the students, this is their first exposure to the world of supercomputing and the career paths that are available in industry and research. Watching them begin to realize the range of opportunities open to them is very gratifying; it even penetrates a few layers of my own dispirited cynicism.

The schools sending the teams also realize great value from the contests. Several universities have used the SCC as a springboard to build a more robust computer science and HPC curriculum – sometimes designing classes around the competition to help prepare their teams. The contests also give the schools an opportunity to highlight student achievement, regardless of whether or not they win.

Just being chosen to compete is an achievement. As these competitions receive more attention, the number of schools applying for a slot has increased. Interest is so high in China that annual ‘play-in’ cluster competitions are held to select the university teams that will represent the country at ISC and SC. With all that said, there’s another reason I find these competitions so compelling: they’re just plain fun. The kids are almost all friendly and personable, even when there’s a language barrier hindering full-bandwidth communications. They’re eager and full of energy. They definitely want to win, but it’s a good-spirited brand of competition. Almost every year we’ve seen teams donate hardware to teams in need when there are shipping problems or when something breaks.

It’s that spirit, coupled with their eagerness to learn and their obvious enjoyment, that really defines these events. And it’s quite a combination.



2016

ASC16: 175 teams, 4 continents, $36,000: It’s the Amazing HPC Cluster Race

The largest student cluster competition in the known world kicked off Monday in Wuhan (or Woohan! as I call it), China. Sixteen teams representing universities from China, South America, the US, and Europe are participating in the fifth annual Asia Student Supercomputer Challenge.

This competition just gets bigger and bigger. This year, it started with 175 teams of undergraduate students vying to get into the finals by proving that they know their stuff when it comes to HPC and supercomputers. The field was winnowed down to the sixteen best and brightest, who were then invited to the finals in Woohan (Wuhan).

HUST, the Huazhong University of Science and Technology, hosts the final phase of the competition this year. The undergraduate teams in the HUST finals configure and build their own supercomputer from Inspur-supplied building blocks. The teams can build as large a system as they want, as long as it doesn’t consume more than 3,000 watts during the competition. Teams then compete to turn in the best results on two HPC benchmarks (HPL and HPCG), plus other scientific applications including:

MASNUM-WAM

This sounds a lot like the name of an Asian boy band but is actually a software application that does some very interesting things. As some of you may know, about 71% of the Earth’s surface is covered by water. (This figure doesn’t even include the water in bottles, cans, or basements.) This water tends to slosh around some due to earthquakes, storms, and the like. In the cluster competition, MASNUM-WAM is being used as a numerical wave modeler to predict the behavior of large bits of water.

ABINIT

is an application used to figure out the total energy, charge density, and electronic structure of systems made out of electrons and nuclei – like atoms and things. It also does many other complicated things like performing molecular dynamics simulations using Density Functional Theory (DFT) forces or, if you’re in the mood, generating dynamical matrices.

DNN

is an interesting challenge to the students. They’ll be using a chunk of Tianhe-2, the largest supercomputer in the world, to teach a Deep Neural Network. What they’re teaching it, I’m not sure; but they’re using hardware that consists of eight 2-Xeon nodes, each equipped with three Xeon Phi accelerators. In order to complete this task, they’ll have to optimize their Tianhe-2 cluster to achieve max performance on the data set supplied by competition organizers.

Mystery Application (ABySS)

is a de novo (meaning “from the new”), parallel gene-sequence assembler designed to piece together short reads into longer genomes. A use case: if you have a bunch of short cat genome sequences spread around your lab and you want to tie them together in order to analyze a longer genome or to spot changes, you’d just fire up ABySS and let it do its thing – easy peasy.

Not an easy slate of applications for budding supercomputer jockeys. This competition differs from the SC and ISC versions in that there are several cash awards for winners in various categories. The most popular two teams (as judged by Tweets and WeChats) each win 5,000 RMB (about $770 US), while the winner of LINPACK receives 10,000 RMB (about $1,540 US). Each of the four winners of the “Application Innovation Award,” given to the teams who turn in the best app optimizations, receives 10,000 RMB. The winner of the ePrize, which competition organizers would like to see become the Gordon Bell Prize for young HPC talent, fittingly receives a cash award of 27,182 RMB ($4,174 US). The second-place team for the overall championship takes home 50,000 RMB (almost $7,700 US), while the Grand Champion nabs a cool 100,000 RMB ($15,360 US).
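Since LINPACK (HPL) appears in every one of these contests and even carries its own cash prize at ASC, it is worth sketching the standard problem-size arithmetic teams lean on when tuning it. The usual rule of thumb is to size the N x N double-precision matrix to fill roughly 80% of total memory (8 bytes per element) and round N to a multiple of the block size NB. The node count, memory size, and NB below are assumed figures for illustration only.

import math

# Hypothetical cluster: 8 nodes with 64 GiB of RAM each (assumed figures).
nodes = 8
mem_per_node_gib = 64
total_mem_bytes = nodes * mem_per_node_gib * 2**30

# Common HPL rule of thumb: fill ~80% of memory with the N x N double matrix.
fill_fraction = 0.80
n_raw = math.sqrt(fill_fraction * total_mem_bytes / 8)   # 8 bytes per double

# Round N down to a multiple of the block size NB (192 is a typical starting point).
NB = 192
N = int(n_raw // NB) * NB

print(f"Suggested HPL problem size N = {N} (raw estimate {n_raw:.0f}), block size NB = {NB}")

From there, tuning is mostly a matter of iterating on NB, the P x Q process grid, and the broadcast and panel-factorization settings in HPL.dat while keeping one eye on the power meter.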




Structure

At SC, the students begin their HPCC and separate LINPACK runs on the morning of day 1, and the results are due around 5:00 p.m. that day. This usually isn’t very stressful; most teams have run these benchmarks many times and could do it in their sleep. The action really picks up in the evening when the datasets for the scientific applications are released.

The scientific applications and accompanying datasets are complex enough that it’s pretty much impossible for a team to complete every task. So from the evening of day 1 until their final results are due on the afternoon of day 3, the students are pushing to get as much done as possible. Teams that can efficiently allocate processing resources have a big advantage.

At ISC, the format is a set of three day-long sprints. Students run HPCC and LINPACK the afternoon of day 1 but don’t receive their application datasets until the next morning. On days two and three, they’ll run through the list of workloads for that day and turn in the results later that afternoon.

The datasets usually aren’t so large that they’ll take a huge amount of time to run, meaning that students will have plenty of time to optimize the application to achieve max performance. However, there’s another wrinkle: the organizers spring a daily “surprise” application on the students. The teams don’t know what the application will be, so they can’t prepare for it; this puts a premium on teamwork and general HPC/science knowledge.

ASC is almost the same as ISC, but it adds a 10-minute team presentation at the end that counts toward the final score.

ISSC structure:


General Competition Rules


1. Safety First: Equipment configurations are always subject to safety as the first consideration. If it cannot be done safely, then it is unacceptable. When in doubt, ask an ISSC supervisor.

2. Hands off: No one can touch the equipment physically after the competition starts on the morning of the 16th (Friday) with the HPCC benchmark runs. If there is a need to touch the equipment, an official from the ISSC committee needs to be called and will rule on the issue. The only exception is in keeping with Rule 1: if an unsafe condition is found, anyone can power down the equipment, and an ISSC supervisor must be called immediately afterwards.

3. Powered on at all times: All equipment used for running the HPCC benchmarks must be used when running the competition applications (i.e. you cannot run LINPACK on half the machine and then power up the whole system to run the competition applications). No rebooting: reboots are permitted only for hung or failed hardware, and an ISSC committee member must be notified and present for any rebooting of hardware.

4. Assistance from others: Prior to the competition, teams are encouraged to work closely with vendor partners and domain scientists to understand their hardware and the competition applications. This is a wonderful learning opportunity for the students, and we hope team members, the supervisor, and vendor partners collaborate to maximize the educational impact. Once the competition starts, student teams will not be allowed to receive assistance from supervisors, mentors or vendor partners. The five team members will be on their own to work through cluster and application issues.

5. Stay Under Power: Alarms will go off if the power draw on either PDU exceeds the 1560-watt soft limit, and point penalties will be assessed for each alarm and for not responding appropriately to the issue.

6. On Site Access Only: Teams will NOT be permitted to access their clusters from outside the local network.


Hardware Rules and Specifications


Booths are of standard size and back onto a solid wall or curtain. Teams must fit into this space along with the hardware for all activities and must have their display visible to the viewing public.

Teams are responsible for obtaining their cluster hardware and transporting it to the competition venue.

The computational hardware (processors, switch, storage, etc.) must fit into an enclosure no larger than a single 42U rack, which must be provided by the team.

No extra cooling will be provided by the competition beyond the venue's normal air-conditioning operations, even under competition load.

Any external cooling systems must be closed-loop systems, and the entire system must be on the competition metered power. Once the competition starts, no liquid may be removed from or added to any cooling system (e.g., no drains).

The hardware must be commercially available at the time of competition start (Friday morning) and teams must display, for public view, a complete list of hardware and software used in the system. All hardware must meet these requirements:

No hardware in the competition machine may be subject to a Non-Disclosure Agreement.

All technical specifications of all hardware components should be available to the general public at competition start.

All performance results from the competition machine can be published without restriction.

No changes to the physical configuration are permitted after the start of the competition. In the case of hardware failure, replacements can be made while supervised by an ISSC committee member.

Use of sleep states is permitted (powering off and hibernation are not) as long as, when all systems in the rack are powered on in their lowest running (non-sleep) OS state, they do not exceed the power limitation.

The running hardware must not exceed the power limitation (1560 watts × 2 circuits); this is especially important for teams who bring extra and/or spare hardware.

Other systems (such as laptops and monitors) may be powered from separate power sources provided by the conference.

A network drop will be provided for outgoing connections only. Teams will NOT be permitted to access their clusters from outside the local network.

Computational hardware may be connected via wired connections only – wireless access is not permitted.

Wireless access for laptops will be available throughout the convention via IITB Wireless.


Power Rules and Specifications


All components associated with the system, and access to it, must be powered through the 120-volt, 20-amp circuits provided by the conference.

Battery backup or UPS (uninterruptible power supply) systems may not be used during the competition.

Two circuits, each with a soft limit of 1560 watts, will be provided.

The model of PDU the competition will use is: Geist RCXRN102-102D20TL5-D PDU.

Teams should tune their equipment never to exceed the 1560-watt limit on each of the two PDUs.

Teams should be prepared to tune their hardware’s power consumption based on the power reported by the PDUs’ power monitor, which teams will be able to read from the PDUs’ LED readouts as well as over Ethernet via SNMP (a simple polling sketch follows this list).

Any team that registers over 1560 watts is subject to a penalty.

If there are frequent blips, or blips in a recognizable pattern, the team will be penalized.

Teams are subject to penalization or disqualification if they ever register 1800 watts (15 amps at 120 volts) or more for any duration.

Convention center power is breakered at 20 amps and may blow before the PDU, causing delays for the team as well as expense and hassle for the competition organizers.
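Because the PDUs report power over Ethernet via SNMP, a team can script a simple watchdog instead of eyeballing the LED readouts. The sketch below is a minimal example only: it assumes the net-snmp command-line tools are installed, and the hostnames, community string, and OID are placeholders; the real power-reading OID must be taken from the Geist PDU's MIB, and its units or scaling may differ.

import subprocess
import time

SOFT_LIMIT_W = 1560                      # per-PDU soft limit from the rules
PDUS = ["pdu-a.local", "pdu-b.local"]    # placeholder hostnames (assumed)
POWER_OID = "1.3.6.1.4.1.XXXX.1.1"       # placeholder: use the real OID from the Geist MIB
COMMUNITY = "public"                     # placeholder SNMP community string

def read_watts(host):
    """Query one PDU for its current power draw via snmpget (-Oqv prints the bare value)."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", host, POWER_OID],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

while True:
    for pdu in PDUS:
        try:
            watts = read_watts(pdu)
        except (subprocess.CalledProcessError, ValueError) as exc:
            print(f"[warn] could not read {pdu}: {exc}")
            continue
        status = "OVER SOFT LIMIT" if watts > SOFT_LIMIT_W else "ok"
        print(f"{pdu}: {watts:.0f} W  {status}")
    time.sleep(10)   # poll every 10 seconds

A watchdog like this pairs naturally with the penalty rules above: catching a climb toward the 1560-watt soft limit a few seconds early is far cheaper than explaining an alarm to the judges.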


System Software Rules and Guidelines


Teams may choose any operating system and software stack that will run the challenges and display software.

Teams may pre-load and test the applications and other software.

Teams may study and tune the open-source benchmarks and applications for their platforms (within the rules, of course).

We encourage teams to use schedulers to run their clusters autonomously while they enjoy other aspects of the conference.
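As a concrete illustration of that last point, here is a minimal sketch of chaining work through a scheduler so the cluster keeps running unattended. It assumes a SLURM-managed cluster, and the batch-script names are placeholders rather than real competition inputs.

import subprocess

# Placeholder batch scripts for the competition workloads (assumed names).
jobs = ["hpl.sbatch", "hpcg.sbatch", "app_dataset1.sbatch", "app_dataset2.sbatch"]

previous_job_id = None
for script in jobs:
    cmd = ["sbatch", "--parsable"]
    if previous_job_id is not None:
        # Start this job only after the previous one finishes successfully.
        cmd.append(f"--dependency=afterok:{previous_job_id}")
    cmd.append(script)

    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    previous_job_id = result.stdout.strip().split(";")[0]   # --parsable prints "jobid[;cluster]"
    print(f"queued {script} as job {previous_job_id}")

With a chain like this queued, the nodes stay busy overnight while the team sleeps or walks the exhibit floor, which is exactly the resource-allocation advantage described in the Structure section above.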


System Architecture Rules and Guidelines


Each accepted team must submit a final architecture proposal by the date listed on the ISSC submissions website for Final Architecture.

Failure to submit a final architecture proposal will result in automatic disqualification. The final architecture should be determined in close consultation with sponsors, taking into consideration the competition applications.

Hardware and software combinations should be generally applicable to any computational science domain. This is especially important as not all applications will be revealed to the teams until the competition.

While novel system configurations are encouraged, systems designed to target a single application or benchmark will generally not be favourably considered.

The proposal should contain detailed information about both the hardware being used and the software stack that will be used to participate in the challenge.

The detail should be sufficient for the judging panel to determine if all the applications will easily port to and run on the computational infrastructure being proposed.


For further queries, contact:


Hrishikesh Ekade

Manager, Events

hrishikesh@techfest.org

+91 9167820852

Sarthak Mittal

Manager, Events

sarthak@techfest.org

+91 9920922003