What will Machine Learning mean for programming jobs?
A revolution is underway, and it may change what the craft of programming looks like for the next generation.
We are living through a machine learning revolution. Over the past decade or so, for a growing range of problems, it has become practical to teach computers how to solve them rather than program the solutions step by step. I suspect that advances in machine learning will transform the craft of programming; what the next generation of computer professionals does 25 years from now may look very little like what programmers do today.
Computers are just machines that use electricity to solve lots of math problems really fast. We take it for granted today that computers can be “programmed,” that you don’t have to design and build a new machine for each new problem you want to solve. However, people had to figure that out! In 1936, Alan Turing showed that a very simple hypothetical programmable machine (the “Turing Machine”) could compute anything that can be computed. In 1945, John von Neumann proposed one of the first practical designs for a programmable digital computer. Today, the phone in your pocket is pretty close to a “universal computing device” that just needs a new program to unlock entirely new capabilities. (Remember the early iPhone slogan “There’s an app for that”?)
Because computers are “fast math machines,” programming a computer is basically about turning real-world problems into solvable math problems. Take the photos app on your phone as an example. The programmers who created that app needed a way to represent colors with numbers, needed a way to mathematically describe an entire image from individual points of color, and finally needed to make sure that the math problems they were describing were solvable with the available computing hardware. Programming requires deep understanding: How do light, images, and human perception work? What are the capabilities and limitations of the hardware I am programming? Getting to this level of understanding about anything is one of the thrills I talked about when I described why I am still a programmer.
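To make “colors as numbers” concrete, here’s a toy sketch in Python (not the actual code behind any photos app): a pixel becomes a triple of red, green, and blue intensities, an image becomes a grid of those triples, and a standard luminance formula turns the grid into a grayscale image.

```python
# A toy model of an image: a 2x2 grid of (red, green, blue) pixels,
# with each channel an intensity from 0 to 255.
image = [
    [(255, 0, 0), (0, 255, 0)],      # a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # a blue pixel, a white pixel
]

def to_grayscale(img):
    # Rec. 601 luma weights: eyes are most sensitive to green,
    # so green counts the most toward perceived brightness.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

print(to_grayscale(image))  # [[76, 150], [29, 255]]
```

Multiply that little grid out to twelve million pixels and you have roughly what your phone’s camera hands to the software.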
However: Just as someone invented the notion of a universal computing device, what if it were possible to create a universal algorithm? That’s the promise of machine learning. Machine learning first took off with problems that we didn’t know how to solve with conventional programming. For example, consider the problem of recognizing whether there’s a dog in an image. While we know the math behind representing colors, nobody knows a simple mathematical function that takes all of the colors in an image as input and produces “dog or not a dog” as its output. If you can’t turn the problem into a math problem, you can’t program a computer to solve it. However, researchers discovered that you could create a mathematical algorithm that takes tons of images as input, along with labels saying whether each image contains a dog, and then outputs an algorithm that determines whether a new image contains a dog. It’s a crazy trick, but it works: We don’t know how to write the algorithm that solves the “is this a picture of a dog” problem, but we can create an algorithm that makes an algorithm to solve the “is this a picture of a dog” problem. This is the main machine learning breakthrough: With enough examples, computers can figure out the math on their own.1
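Here’s a minimal sketch of that trick using scikit-learn. The feature vectors below are made-up stand-ins for real image data (a real dog detector would be a neural network trained on millions of labeled photos), but the shape of the idea is the same: hand the library examples and labels, get back a function.

```python
# An algorithm (fit) that makes an algorithm (model.predict).
from sklearn.linear_model import LogisticRegression

# Each row is a made-up numeric summary of one image; each label
# records whether that image contained a dog.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = [1, 1, 0, 0]  # 1 = dog, 0 = not a dog

model = LogisticRegression().fit(X, y)  # learn from the examples
print(model.predict([[0.85, 0.15]]))    # the learned algorithm says: [1]
```

Nobody wrote a rule that says what a dog looks like; the rule fell out of the examples.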
I deliberately use the term machine learning instead of artificial intelligence. Artificial intelligence invites too many hard-to-settle philosophical debates about the nature of intelligence and focuses the conversation on the current quality of the output of programs such as ChatGPT and DALL-E. My focus is on the way humans program computers in the first place: Did someone write out, step by step, how to solve a problem, or did we let the computer derive its algorithm from examples? Debate if you will whether the output of ChatGPT or DALL-E exhibits intelligence; it is unambiguous that both were created via machine learning, not conventional programming.
For two reasons, I expect machine learning to become a more widespread technique for programming computers. The first is as close to a sure bet as exists in this industry: Hardware will keep getting more powerful and cheaper, making it practical to apply machine learning in more and more cases. I’m less confident in my second reason: I think we’re close to solving the biggest obstacle to the widespread use of machine learning, the “training data problem.” Machine learning requires a lot of data and a lot of computing resources. Training GPT-4 supposedly cost upwards of $100 million; that is not an expense most software projects can absorb. However, you can take a general-purpose machine learning model and fine-tune it to the problem you face with significantly less time, data, and hardware than it took to create the general-purpose model. Fine-tuning a general-purpose model could become a key activity in most software projects. Plus, state-of-the-art machine learning models are becoming useful for a wide variety of problems without further fine-tuning. A common first step when faced with a simple programming problem is to see whether ChatGPT can already solve it for you.
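As a rough illustration of what fine-tuning looks like in practice, here’s a sketch using the Hugging Face transformers and datasets libraries (one popular toolchain among several; the model and dataset named here are just example choices). A small pretrained model is adapted to a sentiment-labeling task with a thousand examples, a workload that fits on a single GPU.

```python
# A hedged sketch of fine-tuning: start from a general-purpose
# pretrained model and adapt it with a small labeled dataset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # a small pretrained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# 1,000 labeled movie reviews: a sliver of the data (and cost) that
# went into pretraining the base model.
dataset = load_dataset("imdb", split="train[:1000]").map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # minutes on one GPU, not months in a data center
```

The expensive part, learning the general structure of language, was paid for once by whoever pretrained the base model; the project-specific part is comparatively cheap.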
The increasing practicality of machine learning makes me wonder how much longer the craft of programming as I know it will remain relevant. Will it remain a valuable skill to break real-world problems into simple math problems that a computer can easily solve? Or will computers’ increasing ability to “figure out the math from examples” render conventional programming ability as valuable as darkroom skills after the invention of digital photography? We’re already firmly in an in-between world, the dawn of the era of the “cyborg programmer”: Computers cannot yet easily write most programs on their own, but machine learning techniques are making human programmers more productive through tools like ChatGPT and GitHub Copilot. I don’t see the pace of innovation in machine learning slowing. Things that used to be impossible or impractical for a computer will become both possible and commonplace. If I were starting my career in software today, I would make sure to have a firm grounding in machine learning.
1. If you’ve heard people talk about “training a machine learning model,” this is what they’re talking about. Producing an algorithm via machine learning has a couple of drawbacks compared to figuring out the math on your own. First, the model won’t be perfect; we’ve all seen computer image recognition and face recognition make mistakes. Second, we often don’t understand how or why the model works.