I nearly got into an argument right before Xmas with work colleagues about Gen AI. I had made some snarky comment and got pushback: they didn’t understand why I wouldn’t use it, since it generates good code if you use it right. I ended up just ignoring the pushback, and then the break helped the whole thing go away.
That meant I could avoid the awkward conversation about my reasoning. Because my response would have been that whether or not it is an effective generator of good code is irrelevant to me. My distaste for LLMs rests on moral and ethical grounds: the ethics of using a tool built on theft, and the morality of using a tool that wastes so much water and power in the opening stages of an existential crisis triggered by climate change.
There’s axiomatically no way to live a moral life in capitalism. So I don’t like to get into that with work colleagues. I’ll tease my friends about their moral failings like working for AWS, but it’s closer to sanctimonious to push that kind of position with people I only know because we have the same employer.
To some extent, though, I’ve found myself over the last couple of days wondering how, or if, DeepSeek and its apparently significantly reduced training-phase impacts alter the equation for me.