Since 2013, we’ve been metaphorically peering over the shoulders of programmers to create our annual interactive rankings of the most popular programming languages. But fundamental shifts in how people are coding may not just make popularity harder to measure; they could make the concept itself irrelevant. And then things might get really weird. To see why, let’s start with this year’s rankings and a quick refresher on how we put this thing together.
In the “Spectrum” default ranking, which is weighted with the interests of IEEE members in mind, Python once again takes the top spot, with the biggest change in the top five being JavaScript’s drop out of it, from third place last year to sixth place this year. Because JavaScript is often used to create web pages, and vibe coding is often used to create websites, this drop in apparent popularity may be due to the effects of AI that we’ll dig into in a moment. But first, to finish up with this year’s scores: in the “Jobs” ranking, which looks exclusively at what skills employers are looking for, Python has also taken first place, up from second place last year, though SQL expertise remains an incredibly valuable skill to have on your resume.
Because we can’t literally look over the shoulders of everyone who codes, including kids hacking on Minecraft servers or academic researchers developing new architectures, we rely on proxies to measure popularity. We detail our methodology here, but the upshot is that we merge metrics from multiple sources to create our rankings. The metrics we choose are public signals of interest across a wide range of languages: Google search traffic, questions asked on Stack Exchange, mentions in research papers, activity on the GitHub open-source code repository, and so on.
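To make that concrete, here is a minimal sketch of how such a merge might work. The languages, metric values, and weights below are invented for illustration and are not our actual data or weighting; the idea is simply that each metric is normalized and the results are combined in a weighted sum, with different weightings producing different rankings.

# Hypothetical illustration: merge normalized metric scores into one ranking.
# The languages, values, and weights are invented for this sketch; they are
# not Spectrum's actual data or methodology.
metrics = {
    # language: (search traffic, Stack Exchange questions, GitHub activity),
    # each already scaled to the range 0..1
    "Python": (0.95, 0.90, 1.00),
    "JavaScript": (0.80, 0.85, 0.90),
    "SQL": (0.60, 0.70, 0.40),
}
weights = (0.4, 0.3, 0.3)  # a different ranking (say, "Jobs") would just use different weights

def score(values, weights):
    """Weighted sum of already-normalized metric values."""
    return sum(v * w for v, w in zip(values, weights))

# Sort languages by their combined score, highest first, and print the ranking.
ranking = sorted(metrics, key=lambda name: score(metrics[name], weights), reverse=True)
for rank, lang in enumerate(ranking, start=1):
    print(rank, lang, round(score(metrics[lang], weights), 3))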
But programmers are turning away from many of these public expressions of interest. Rather than page through a book or search a website like Stack Exchange for answers to their questions, they’ll chat with an LLM like Claude or ChatGPT in a private conversation. And with an AI assistant like Cursor helping to write code, the need to pose questions in the first place is greatly reduced. For example, across the total set of languages evaluated in the Top Programming Languages (TPL), the number of questions we saw posted per week on Stack Exchange in 2025 was just 22 percent of what it was in 2024.
With less signal in publicly available metrics, it becomes harder to track popularity across a broad range of languages. This existential problem for our rankings can be tackled by searching for new metrics, or by trying to survey programmers, in all their variety, directly. However, an even more fundamental problem is waiting in the wings.
Whether it’s a seasoned coder using an AI to handle the grunt work, or a neophyte vibe coding a complete web app, AI assistance means that programmers can concern themselves less and less with the particulars of any language. First the details of syntax, then flow control and functions, and so on up the levels at which a program is put together: more and more is being left to the AI.
Code-writing LLMs are still very much a work in progress. But as they take over an increasing share of the work, programmers will inevitably shift from being the kind of people willing to fight religious wars over whether source code should be indented by typing tabs or spaces to people who care less and less about what language is used.
After all, the whole reason different computer languages exist is that, given a particular challenge, it’s easier to express a solution in one language than in another. You wouldn’t control a washing machine using the R programming language, or, conversely, do a statistical analysis on large datasets using C.
But it is technically possible to do both. A human might tear their hair out doing it, but LLMs have about as much hair as they do sentience. As long as there’s enough training data, they’ll generate code for a given prompt in any language you want. In practical terms, this means using one—any one—of today’s most popular general purpose programming languages. In the same way most developers today don’t pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail.
Sure, there will always be some people who care, just as today there are nerds like me willing to debate the merits of writing for the Z80 versus the 6502 8-bit CPUs. But overall, the popularity of different computer languages could become as obscure a topic as the relative popularity of railway track gauges.
One obvious long-term consequence of this is that it will become harder for new languages to emerge. Previously, a new language could take hold when individuals or small teams evangelized their approach to potential contributors and users. Presentations, papers, demos, sample code, and tutorials seeded new developer ecosystems. A single well-written book, like Leo Brodie’s Starting Forth or Brian Kernighan and Dennis Ritchie’s The C Programming Language, could make an enormous difference to a language’s popularity.
But while a few samples and a tutorial can be enough material to jump-start adoption among programmers familiar with the ins and outs of hands-on coding, it’s not enough for today’s AIs. Humans build mental models that can extrapolate from relatively small amounts of data. LLMs rely on statistical probabilities, so the more data they can crunch, the better they are. Consequently, programmers have noted that AIs give noticeably poorer results when trying to code in less-used languages.
There are research efforts to make LLMs more universal coders, but that doesn’t really help new languages get off the ground. Fundamentally, new languages grow because they scratch some itch a programmer has. That itch can be as small as annoyance at having to place a semicolon after every statement, or as large as a philosophical argument about the purpose of computation.
But if an AI is soothing our irritations with today’s languages, will any new ones ever reach the kind of critical mass needed to make an impact? Will the popularity of today’s languages remain frozen in time?
What’s the future of programming languages?
Before speculating further about the future, let’s take stock of where we are today. Modern high-level computer languages are really designed to do two things: create an abstraction layer that makes it easier to process data in a suitable fashion, and stop programmers from shooting themselves in the foot.
The first objective has been around since the days of Fortran and Cobol, aimed at processing scientific and business data respectively. The second objective emerged later, spurred in no small part by Edsger Dijkstra’s 1968 paper “Go To Statement Considered Harmful.” In it, he argued for eliminating the ability of programmers to jump to arbitrary points in their code. The restriction was meant to prevent so-called spaghetti code, which makes it hard for a programmer to understand how a computer actually executes a given program. Instead, Dijkstra demanded that programmers bend to structural rules imposed by the language. Dijkstra’s argument ultimately won the day, and most modern languages do indeed minimize or eliminate Go Tos altogether in favor of structures like loops, functions, and other programmatic blocks.
These structures don’t exist at the level of the CPU. If you look at the instruction sets for Arm, x86, or RISC-V processors, the flow of a program is controlled by just three types of machine-code instruction: conditional jumps, unconditional jumps, and jumps that store a return address (so you can call a subroutine and get back to where you started). In other words, it’s Go Tos all the way down. Similarly, strict data types designed to label and protect data from incorrect use dissolve into anonymous bits flowing in and out of memory.
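You don’t even have to drop all the way down to machine code to see this. As a quick illustration (using Python only because it tops this year’s rankings), disassembling a small function shows what a tidy, structured while loop is lowered to: compare-and-jump instructions, with the loop itself nowhere in sight.

# Disassemble a small function to see structured control flow lowered to jumps.
import dis

def countdown(n):
    # A tidy, Dijkstra-approved structured loop...
    while n > 0:
        n -= 1
    return n

# ...that the interpreter implements with conditional and unconditional jumps.
# The exact opcode names vary by Python version (e.g. POP_JUMP_IF_FALSE,
# JUMP_BACKWARD), but the pattern is the same.
dis.dis(countdown)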
So how much abstraction and anti-foot-shooting structure will a sufficiently advanced coding AI really need? A hint comes from recent research in AI-assisted hardware design, such as Dall-EM, a generative AI developed at Princeton University to create RF and electromagnetic filters. Designing these filters has always been something of a black art, involving the wrangling of complex electromagnetic fields as they swirl around little strips of metal. But Dall-EM can take in the desired inputs and outputs and spit out something that looks like a QR code. The result is something no human would ever design, but it works.
Similarly, could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Would we need high-level languages at all in that future? True, this would turn programs into inscrutable black boxes, but they could still be divided into modular, testable units for sanity and quality checks. And instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.
What’s the role of the programmer in a future without source code? Architecture design and algorithm selection would remain vital skills. For example, should a pathfinding program use a classic approach like the A* algorithm, or should it instead try a newer method? How should a piece of software be interfaced with a larger system? How should new hardware be exploited? In this scenario, computer science degrees, with their emphasis on fundamentals over the details of programming languages, rise in value relative to coding boot camps.
Will there be a Top Programming Language in 2026?
Right now, programming is going through the biggest transformation since compilers broke onto the scene in the early 1950s. Even if the predictions that much of AI is a bubble about to burst come true, the thing about tech bubbles is that there’s always some residual technology that survives. It’s likely that using LLMs to write and assist with code is something that’s going to stick. So we’re going to be spending the next 12 months figuring out what popularity means in this new age, and what metrics might be useful to measure. What do you think popularity should mean? What metrics do you think we should consider? Let us know in the comments below.