In my defence, I manually verify every test/calculation by hand, but so far Copilot is nearly 100% accurate with the tests it generates. Unless it's something particularly complex you're working with, if Copilot doesn't understand what a function does, you might want to check whether the function should be simplified or split up. Specific edge cases I still need to write myself, though, as Copilot seems mostly focused on the happy paths it recognises.
I'm a bit of a TDD person. I'm not as strict about it as some people are, but the idea of just telling AI to look at your code and generate unit tests for it really rubs me the wrong way. If you wrote the code wrong, it's going to assume it's right. Sure, there are probably those golden moments where it realizes you made a mistake and tells you, but that's not unique to writing unit tests with AI; you could get the same thing just by asking it to review the code.
I’m not dogmatic about test driven development, but seeing those failing tests is super important. Knowing that your test fails without your code but works with your code is huge.
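To make the red-green point concrete, here's a minimal sketch (hypothetical function and names, not from anyone's actual codebase): write the test first, run it and watch it fail, then write the code that makes it pass.

```python
import datetime

# Step 1: the test exists before the implementation does.
def test_is_weekend():
    assert is_weekend(datetime.date(2024, 1, 6))      # a Saturday
    assert not is_weekend(datetime.date(2024, 1, 8))  # a Monday

# Running test_is_weekend() at this point raises NameError,
# because is_weekend doesn't exist yet. That's the "red" step:
# proof the test can actually fail.

# Step 2: now write the code that makes it pass.
def is_weekend(d):
    # Python's weekday(): Monday=0 ... Saturday=5, Sunday=6
    return d.weekday() >= 5

test_is_weekend()  # the "green" step: same test, now passing
```

If a test has never been seen failing, you don't actually know it's capable of catching the bug it claims to guard against.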
So many unit tests I see are pointless. I think people sometimes write them just to hit a coverage number. The other day I saw a test a coworker wrote for a function that returns a date matching a query, and the test data was a list containing a single date. That doesn't test that it's grabbing the right one at all.
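A sketch of why that single-element test data proves nothing (hypothetical function and field names, not the coworker's actual code): with one record, any lookup that doesn't crash "passes"; with several, the function has to actually select the right one.

```python
import datetime

def date_for_query(records, query):
    """Return the date of the first (name, date) record matching query."""
    for name, date in records:
        if name == query:
            return date
    return None

# Weak test: one record, so even `return records[0][1]` would pass.
assert date_for_query(
    [("alpha", datetime.date(2024, 3, 1))], "alpha"
) == datetime.date(2024, 3, 1)

# Stronger test: multiple records, so the query logic is actually exercised.
records = [
    ("alpha", datetime.date(2024, 3, 1)),
    ("beta",  datetime.date(2024, 5, 9)),
    ("gamma", datetime.date(2023, 12, 31)),
]
assert date_for_query(records, "beta") == datetime.date(2024, 5, 9)
assert date_for_query(records, "missing") is None
```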
It’s just sort of a bigger problem I see with folks misunderstanding and/or undervaluing unit tests.
Whenever I see someone say “I write my unit tests with AI” I cringe so hard.