
How I Got Into HPC Without a CS Degree (And What Actually Opened Doors)

10 min read · Career & AI
Tags: hpc, career, open-source, no-degree, slurm, conference-speaking

Introduction

I left Reykjavik University in 2011 after two years of a three-year CS programme. No degree. No clear plan.

That wasn't a calculated move. It was impatience. I wanted to build things more than I wanted to sit in lectures about building things. A job appeared at a small Norwegian company called Destino, and I took it. Within a few years I was a core developer at GreenQloud, a cloud startup running Apache CloudStack on bare metal in an Icelandic data centre. We had InfiniBand racks, custom Java extensions to the CloudStack provisioning layer, and a team of engineers doing things that had no Stack Overflow answers.

That's how I got into HPC. Not through an academic HPC programme, not through a national lab internship, not through a carefully planned career pivot. Through a sequence of curious decisions, a startup that got acquired by NetApp, and the realisation that the intersection of bare metal, low-latency networking, and workload scheduling was the most interesting problem space I'd ever touched.

I've been in that space ever since — building HPCFLOW from scratch at Advania, scaling it to multi-region HPC-as-a-Service as CTO at atNorth, serving customers like Stanford's Living Heart Project, then the HPC product team at Canonical, and now Lead HPC Engineer at Millennium Management. Four platforms. Four different company sizes. Four different titles. Same role: builder.

This is not a guide about how most people get into HPC. It's what worked for me. Draw your own conclusions.

The No-Degree Thing

I want to be direct about this because it's the thing people ask about most.

The HPC field is full of PhDs. Computational physicists, applied mathematicians, domain scientists who found themselves managing clusters and then building them. If you're on that path, it's a legitimate one. The academic route gives you depth in a specific domain — climate modelling, molecular dynamics, fluid dynamics — and that domain expertise is genuinely valuable.

But there is nothing inevitable about the credential path. Nothing in HPC requires a CS degree the way, say, an engineering license or a medical degree does. What it requires is demonstrated capability — and in the open-source world, demonstrated capability is legible without a transcript.

Every door that opened for me opened because of something I had built or shipped. GreenQloud hired me because I could extend Apache CloudStack. Advania trusted me to build HPCFLOW because I had cloud infrastructure experience at a company that got acquired — which meant I'd survived production at scale. Canonical saw a practitioner with multi-year HPC platform experience. Millennium saw someone who had run SLURM clusters in a demanding environment.

None of those doors required the third year of a CS degree.

What Actually Opened Doors

Open-Source Work

This is the most consistent pattern across my career. Not contributing to massive projects from day one — that's intimidating and rarely how it works. But solving real problems in public.

At GreenQloud we extended Apache CloudStack. That work was visible. It was in commits, in mailing list threads, in the kinds of conversations that happen at OpenStack summits when people recognise your name from a patch.

When I was building HPCFLOW at Advania, I was integrating OpenStack Ironic with SLURM for bare metal HPC provisioning. There were no Stack Overflow answers for that combination in 2016. Nobody had written the guide. So I wrote the guide, pushed the code, and showed up to conferences to talk about it. That combination — public code plus a conference talk — is a more effective signal than a CV entry.

If you want to get into HPC and you're starting from outside, the right move isn't to read about SLURM. It's to stand up a SLURM cluster, hit a real problem, solve it, and write about it publicly. A detailed blog post documenting what broke and how you fixed it will do more for your career than a certification.
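Hitting a real problem starts with submitting a real job. A minimal batch script looks something like this — the job name, node counts, and time limit are illustrative, not prescriptive:

```shell
#!/bin/bash
#SBATCH --job-name=hello-cluster   # name shown in squeue
#SBATCH --nodes=2                  # request two nodes
#SBATCH --ntasks-per-node=4        # four tasks per node
#SBATCH --time=00:05:00            # wall-clock limit
#SBATCH --output=%x-%j.out         # %x = job name, %j = job id

srun hostname                      # one line per task, across both nodes
```

Submit it with `sbatch hello.sh`, watch it with `squeue`, and the first time it sits in a pending state you don't understand, you've found your first real problem to write about.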

Conference Talks

I have spoken at ISC High Performance, Supercomputing, Open Infrastructure Summit, ARM Dev Summit, and NASA Cybersecurity Day. I'm not listing those to impress — I'm listing them because the talks themselves created more opportunities than any job application I have ever submitted.

The HPC community is small. Genuinely small. If you stand at the front of a room at SC or ISC and talk about something real — something you actually built, problems you actually hit, numbers that actually came from your clusters — people in the audience will find you afterwards. Those conversations become collaborations, references, job offers.

The bar for getting a talk accepted at regional or community conferences is lower than people think. You don't need to have built a TOP500 system. You need to have solved an interesting problem and be able to explain it clearly. The talk I gave at OpenStack Summit (now Open Infrastructure Summit) in 2016 about integrating Ironic with bare metal HPC provisioning was not a polished keynote. It was a practitioner explaining what they'd just built. That was enough.

Solving Specific, Unglamorous Problems

HPC is littered with tools that only work if you already know how they work. Configuration that requires tribal knowledge. APIs that haven't been wrapped in anything usable. Operational workflows that are entirely manual because no one has written the automation yet.

I built s9s because there was no k9s equivalent for SLURM. The idea was simple: engineers running Kubernetes clusters got k9s, a terminal UI that made cluster management legible. Engineers running SLURM clusters got squeue and muscle memory. That gap was annoying. So I closed it.

That's the pattern. Not "what can I build that looks impressive?" but "what do I reach for every day that doesn't exist or doesn't work well?"

Tools built from genuine friction are the ones that find users. And tools that find users create conversations, GitHub issues, pull requests from strangers — and occasionally, job offers.

What I Thought Would Matter (But Didn't)

Theoretical CS depth

I have gaps. Real ones. No formal algorithms course, no data structures course taken in the traditional sense. I've learned what I needed when I needed it.

For infrastructure-side HPC — which is most of what I've done — the theory that matters is networks, storage, and scheduling. How InfiniBand achieves ~100ns switch latency versus ~230ns for Ethernet, and why that 130ns gap matters for MPI workloads. How SLURM's fair-share accounting actually allocates priority across competing users. How Ceph's replication overhead trades fault tolerance against write performance.

I learned all of that on running systems, under pressure, with actual users waiting for their jobs. The theory came later, to explain what I'd already observed.
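The fair-share piece is compact enough to sketch. SLURM's classic fair-share factor is documented as F = 2^(-usage/shares): a user who has consumed exactly their entitled fraction of the cluster gets 0.5, an idle user gets 1.0, and over-consumers decay toward zero. This sketch only models that one factor, not the full multifactor priority sum:

```go
package main

import (
	"fmt"
	"math"
)

// fairShareFactor approximates SLURM's classic fair-share formula
// F = 2^(-effectiveUsage / normalizedShares), where both inputs are
// fractions of total cluster usage/shares. F is 1.0 for an idle user,
// 0.5 when usage exactly matches entitlement, and decays toward 0
// as a user over-consumes.
func fairShareFactor(effectiveUsage, normalizedShares float64) float64 {
	if normalizedShares <= 0 {
		return 0
	}
	return math.Pow(2, -effectiveUsage/normalizedShares)
}

func main() {
	// A user entitled to 25% of the cluster:
	fmt.Printf("%.3f\n", fairShareFactor(0.25, 0.25)) // used exactly their share
	fmt.Printf("%.3f\n", fairShareFactor(0.00, 0.25)) // used nothing
	fmt.Printf("%.3f\n", fairShareFactor(0.50, 0.25)) // used double
}
```

In a real cluster this factor is one weighted term among several (age, job size, QOS) that the multifactor priority plugin sums into a job's priority.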

Certifications

I have no HPC certifications. I have no strong opinion against them — some of the NVIDIA training is genuinely useful for GPU programming — but certifications have never appeared in any conversation about why I got a role. They didn't open a single door.

What does appear in those conversations: platforms I built, scale I operated at, problems I solved in public.

Having a "specialty"

Early in my career I thought I needed to pick a lane: systems programming, storage, networking, scheduling. The practitioners I respected seemed to be deep experts in one thing.

What I discovered is that HPC infrastructure rewards genuine breadth. The person who understands how the SLURM scheduler, the InfiniBand fabric, the Lustre filesystem, and the application's MPI communication pattern interact — that person can debug problems that specialists in each domain can't see, because the bug is in the intersection.

Four platforms taught me that the most valuable skill is the willingness to hold the whole stack in your head at once.

The HPC Community Is Actually Welcoming

This surprised me when I first showed up at conferences. I expected a gatekept academic world full of people annoyed by practitioners who didn't have the right credentials.

It's almost the opposite. The community is small enough that people remember who you are after one conference. They're genuinely curious about what others are building. There's a recognition that HPC problems are hard, that everyone is figuring it out, and that the field needs more builders regardless of where they came from.

I've met more mentors at conference dinners than I ever encountered in formal mentorship programmes. The conversations that shaped my thinking about bare metal provisioning at scale, about SLURM's internal state management, about what quantitative finance actually needs from an HPC platform — almost all of those happened at ISC or SC, between talks, over coffee that was somehow both terrible and essential.

Show up. Talk about what you built. Ask specific questions. The community will meet you there.

What I'd Tell My 2011 Self

Not the generic version. The version that's actually true.

The skills that turned out to matter: SLURM administration at real scale. Bare metal networking — not cloud networking, actual physical switches, InfiniBand fabric, RDMA semantics. Storage systems and the tradeoffs between them. Linux performance analysis. Go for building the tooling that didn't exist.

The skill I underestimated: Writing. Not documentation in the bureaucratic sense — clear technical writing that explains a real problem and a real solution. Every talk I gave started as writing. Every tool I shipped that found users had a write-up explaining what problem it solved. The engineers I respect most in this field are the ones who can build things and explain them.

What I'd do faster: Start speaking at conferences earlier. The fear is that you don't know enough. The reality is that "I built this small thing and here's what I learned" is a valid conference talk, and the feedback you get from a room of practitioners will teach you more than another month of solo work.

What I'd ignore: Salary tables. Career ladder diagrams. Lists of certifications. The external map of what an HPC career looks like is almost never accurate to what an HPC career actually is. The people I've met doing the most interesting work in this field took unusual paths to get there.

The honest assessment: I got lucky in timing. GreenQloud appeared when I had the right skills and the right amount of experience to be useful. The HPC field was expanding when I arrived at Advania. The decision to stay curious about infrastructure and keep building things — that I can take credit for. The specific doors that opened, less so.

Luck is not a career strategy. But staying technically curious, building in public, and showing up to the community — those improve the odds considerably.

Where to Start

If you're reading this and you want to get into HPC:

Get SLURM running. Not because SLURM is the only scheduler — it's not — but because it runs on approximately 60% of TOP500 clusters and understanding it hands-on will give you conversational fluency in the field. A small cluster, even a couple of VMs, is enough to learn submission, scheduling policies, fair-share accounting, and node management.
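For a two-VM learning cluster, the configuration is smaller than people expect. A sketch of a minimal slurm.conf — hostnames, CPU counts, and memory here are placeholders to swap for your own VMs:

```ini
# Minimal slurm.conf sketch for a two-node learning cluster.
# ctl and node[1-2] are placeholder hostnames.
ClusterName=learn
SlurmctldHost=ctl
AuthType=auth/munge
SchedulerType=sched/backfill
SelectType=select/cons_tres
PriorityType=priority/multifactor    # enables fair-share accounting
PriorityWeightFairshare=10000
NodeName=node[1-2] CPUs=2 RealMemory=1800 State=UNKNOWN
PartitionName=debug Nodes=node[1-2] Default=YES MaxTime=1:00:00 State=UP
```

Breaking and fixing a config like this — wrong node definitions, nodes stuck in a drained state, jobs pending for unclear reasons — is exactly the hands-on fluency that transfers to real clusters.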

Then find a problem it doesn't solve well for your use case, and solve it in public. Write a blog post, open a GitHub repo, submit a talk abstract to a regional conference. Make the work visible.

The open-source projects I'd contribute to if I were starting today: SLURM itself for the scheduler, a Kubernetes operator for SLURM to understand how HPC and cloud scheduling intersect, and Go tooling that wraps the SLURM REST API — there's still a lot of room there.
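That last category is easy to start on. A hedged sketch of building an authenticated request against slurmrestd — the API version segment (v0.0.40 here) varies by SLURM release, and the port and credentials are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

// newJobsRequest builds an authenticated GET against slurmrestd's
// jobs endpoint. The X-SLURM-USER-NAME / X-SLURM-USER-TOKEN headers
// follow slurmrestd's JWT auth scheme; the version segment in the
// path (v0.0.40) should match your SLURM release.
func newJobsRequest(baseURL, user, token string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/slurm/v0.0.40/jobs", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-SLURM-USER-NAME", user)
	req.Header.Set("X-SLURM-USER-TOKEN", token)
	return req, nil
}

func main() {
	// alice and the token are hypothetical; slurmrestd listens on
	// port 6820 by default.
	req, err := newJobsRequest("http://localhost:6820", "alice", "example-jwt")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
	// Against a running slurmrestd: resp, err := http.DefaultClient.Do(req)
}
```

From here, decoding the JSON job list into Go structs and rendering it in a terminal UI is essentially the s9s shape of project.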

The community is at ISC High Performance in Hamburg every June and Supercomputing every November. Both conferences have student programmes. Both have practitioners who will talk to you if you have something specific to say.

No degree required.