In 2005 I began developing a tech recruiting solution for a problem I regularly faced: How can my company efficiently distinguish between qualified and unqualified programming candidates?
If you point your browser to websites like zhaopin.com or 51job.cn you will see lots of boxes with logos and links to job offers. In May 2005, one of these boxes had a logo of Exoweb (at that time a small Beijing-based outsourcing business I was involved with), and at the other end of the hyperlink my co-workers Ken, Bjørn and I were waiting for candidates.
Several weeks earlier Exoweb had landed a promising contract with a Norwegian customer, and we were hungry for programmers. Hungry, yes, but still picky. Candidate quality mattered. We decided to adopt the quality measures outlined in Joel Spolsky's Guerrilla Guide to Interviewing. In particular, we ran independent interviews (up to three tech interviews per candidate) and diligently checked whether candidates could actually write correct code. The overwhelming majority could not.
It quickly became clear that, though we were spending long hours in interviews, we couldn't efficiently assess the expanded candidate pool without sacrificing critical knowledge about each potential hire.
Beijing is a great city and there are many more exciting things to do than interviewing programmers who cannot write programs. We had strong motivation to figure out a way to improve the process. The inspiration came from the Olympiad in Informatics, the ultimate programming competition for high-school students. I had some experience with the Polish chapter of the Olympiad, and I knew it had been using automated tools to assess solutions to programming problems since the 1990s. I decided to adopt a similar approach in our tech candidate screening.
Ken and Bjørn were a bit skeptical about the amount of work that had to be invested, but certainly interested in freeing up some of their time on Interview Saturdays. I hacked up an automated evaluator called "Exobench" and we set up one machine as a workstation for candidates. From then on, every candidate had to sit in front of a computer and deliver a solution to one simple programming problem using a set of standard programming tools (editor + compiler). Our recruiter then ran the automated evaluator to determine whether the candidate would stay for tech interviews.
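The core of such an evaluator is simple: run the candidate's program against a set of test inputs and compare its output to the expected answers. Here is a minimal sketch in Python; the sample problem, test cases, and the choice to judge plain stdin/stdout programs are illustrative assumptions, not Exobench's actual design:

```python
import subprocess
import sys

# Illustrative problem: read two integers from stdin, print their sum.
# Each test case is a pair of (stdin, expected stdout).
TEST_CASES = [
    ("3 4\n", "7\n"),
    ("10 -2\n", "8\n"),
]

def evaluate(source_path: str, timeout: float = 5.0) -> bool:
    """Run a candidate's Python solution against every test case.

    Returns True only if the program exits cleanly and prints the
    expected output for all cases within the time limit.
    """
    for stdin_data, expected in TEST_CASES:
        try:
            run = subprocess.run(
                [sys.executable, source_path],
                input=stdin_data,
                capture_output=True,
                text=True,
                timeout=timeout,  # guard against infinite loops
            )
        except subprocess.TimeoutExpired:
            return False
        if run.returncode != 0 or run.stdout != expected:
            return False
    return True
```

A real system also needs sandboxing, memory limits, and support for compiled languages, but pass/fail against hidden test cases is the essential screening signal.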
We reviewed the solutions of the rejected candidates to make sure we weren't overlooking talent, but we grew increasingly convinced that candidates unable to get a few lines of code straight should never tinker with our complex distributed web application.
By then, we had screened 2,500 people, interviewed 250, and hired 50 (the figures are rounded, but close to the actual numbers). Given that we usually organized two to three independent tech interviews per candidate, screening out roughly 2,250 candidates before the interview stage saved something between 4,000 and 7,000 hours of senior engineer time.
Based on my experience with Exobench, the team at Enpoka.com created Codility, an online system for programming skills assessment. In 2009 Exoweb introduced Codility into its tech recruiting process, becoming our first large customer and generating 1,200+ evaluations per month.
Since then, Codility has grown in complexity, flexibility, and application. We now help our clients source, assess, and interview their tech candidates, with an unparalleled focus on customer care and candidate experience. We've kept our key tenets since day one: uncompromising quality, automation, a focus on distilling practical programming fundamentals, and an outstanding candidate experience. These goals have been core to Codility's tech recruiting DNA ever since, and feed into how we build our own agile engineering team.
Ready to see how Codility can help your team hire stronger programmers, faster?