Did GitHub Copilot really increase my productivity?
Yuxuan Shui, the developer behind the X11 compositor picom (a fork of Compton), published a blog post detailing their experiences with using GitHub Copilot for a year.
I had free access to GitHub Copilot for about a year, I used it, got used to it, and slowly started to take it for granted, until one day it was taken away. I had to re-adapt to a life without Copilot, but it also gave me a chance to look back at how I used Copilot, and reflect - had Copilot actually been helpful to me?
Copilot definitely feels a little bit magical when it works. It's like it plucked code straight from my brain and put it on the screen for me to accept. Without it, I find myself getting grumpy a lot more often when I need to write boilerplate code - "Ugh, Copilot would have done it for me!", and now I have to type it all out myself. That being said, the answer to my question above is a very definite "no, I am more productive without it". Let me explain.
Yuxuan Shui
The two main reasons why Shui eventually realised Copilot was slowing them down were its unpredictability and its slowness. It's very difficult to understand when, exactly, Copilot will get things right, which is not a great thing to have to deal with when you're writing code. They also found Copilot incredibly slow, with its suggestions often taking 2-3 seconds or longer to appear - much slower than the suggestions from the clangd language server they use.
Of course, everybody's situation will be different, and I have a suspicion that if you're writing code in incredibly popular languages, say, Python or JavaScript, you're going to get more accurate and possibly faster suggestions from Copilot. As Shui notes, it probably also doesn't help that they're writing an independent X11 compositor, something very few people are doing, meaning Copilot hasn't been trained on it, which in turn means the tool probably has no clue what's going on when Shui is writing their code.
As an aside, my opinion on GitHub Copilot is clear - it's quite possibly the largest case of copyright infringement in human history, and in its current incarnation it should not be allowed to continue to operate. As I wrote over a year ago:
If Microsoft or whoever else wants to train a "coding AI" or whatever, they should either be using code they own the copyright to, get explicit permission from the rightsholders for "AI" training use (difficult for code from larger projects), or properly comply with the terms of the licenses and automatically add the terms and copyright notices during autocomplete and/or properly apply copyleft to the newly generated code. Anything else is a massive copyright violation and a direct assault on open source.
Let me put it this way - the code to various versions of Windows has leaked numerous times. What if we train an "AI" on that leaked code and let everyone use it? Do you honestly think Microsoft would not sue you into the stone age?
Thom Holwerda
It's curious that as far as I know, Copilot has not been trained on Microsoft's own closed-source code, say, to Windows or Office, while at the same time the company claims Copilot is not copyright infringement or a massive open source license violation machine. If what Copilot does is truly fair use, as Microsoft claims, why won't Microsoft use its own closed-source code for training?
We all know the answer.
Deeply questionable legality aside, do any of you use Copilot? Has it had any material impact on your programming work? Is its use allowed by your employer, or do you only use it for personal projects at home?