Lately, ChatGPT and other powerful AI models have been talked about... everywhere. Having not used them myself, I could not distinguish reality from hype. So, I spent some time with ChatGPT – enough to form Opinions! I’ve broken them into a few thoughts, and then some more serious topics on AI in general.
Pros and Cons of GPT Itself
Here are my initial impressions after a few hours of talking to my robotic pal.
Pros
- If you give it a well-defined problem, it can (sometimes) generate an almost perfect answer faster than I could myself. I have great hope this will cut down some of the LeEtCoDe nonsense – if there is one application this will surely excel at, it is complex algorithms for well-defined problems. If the average programmer now has these tools at their disposal, there is no need to implement quicksort yourself (not that there was before: .sort()), but you get the idea (a quick illustration follows this list).
- Soon, or maybe even now, it should be the go-to before Google (which inevitably lands you on Stack Overflow) for most programming/debugging questions. Sometimes it takes quite a bit of hunting on SO – you start with a search result, it turns out it’s not really asking the same thing, but oh, someone marked it as a duplicate of this other thing, you follow that link, and so on – eventually you end up where you need to be, or you will be the poor chap who has to ask a new question. So far, though, GPT “understands” what you’re asking pretty darn well – and if it can’t get you the answer, it can get you on the right path faster.
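To make the .sort() quip concrete, here is a minimal Python sketch of my own (an illustration, not GPT output) of the kind of well-defined problem where the standard library has already won:

```python
# Sorting records by a key: a well-defined problem where Python's built-in
# Timsort does the heavy lifting, so there is no need to hand-roll quicksort.
people = [("Ada", 36), ("Grace", 45), ("Alan", 41)]

# Sort by age, descending, without writing any sorting algorithm yourself.
by_age = sorted(people, key=lambda p: p[1], reverse=True)
print(by_age)   # [('Grace', 45), ('Alan', 41), ('Ada', 36)]

# Or sort in place, alphabetically by name.
people.sort(key=lambda p: p[0])
print(people)   # [('Ada', 36), ('Alan', 41), ('Grace', 45)]
```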
Areas of Improvement
- It must learn to ask clarifying questions (without you having to ask it to ask you questions!). Sometimes it is VERY confident in its wrong answer, probably because it always tries to “guess” what you mean and give the best answer it can, rather than “making sure” it understands what you want. It would take fewer iterations if it probed a bit before giving an answer.
- The disclaimer on their page says “it may be out of date for events after 2021”. Does that mean new software libraries/versions/packages since then, too? Software, at least some of it, moves fast, so there needs to be a constant “additive training” process, plus a way to identify the new sources of data to train it with. There is a bit of work ahead.
Will I be Unemployed?
Lots of talk of this in programming circles lately. Should we start training to become plumbers and electricians?
The majority of my time as a senior engineer is not spent programming, at least not consistently. My job is:
- turning vague wants into products or other value to the company
- teasing out what’s really being asked from unclear reqs and vague tickets (see AOI 1 above)
- once we have that, figuring out what the MVP looks like
- from there, prioritizing all the features we thought of
- planning resources, time constraints, release cycles, when to cut versions, and so on
- then we get to program, or at this point delegate the task. Until the requirements change, and we repeat.
By the time you reach the last step, assuming you’ve done steps 1-5 well, you’ve already “won”. You have reduced the monstrous “software engineering” task to a “programming task”, and that is the point where I think our AI friend can really help us.
I do not (at the current time) see these AI tools performing those tasks. Handshakes are what make this world go round – not code. Everything above requires people talking to (and convincing...) people. Maybe there needs to be one fewer junior engineer on the team to implement the tasks. Until I am convinced that these models can navigate their way through a sea of uncertainty, I don’t see them as “engineers” but rather as “helpers”.
Terminator-ey Stuff
You may have seen that Midjourney pulled its free trials after some deeply convincing deepfakes, and that scientists have pressed the panic button on increasingly powerful AI.
I think they are right – not so much for the reasons specified there, about the “space race”, but more for their point that society has NOT sorted out many of the potential consequences of powerful AI yet.
Here’s a few of my largest worries about powerful AI (in general – not necessarily these LLMs like ChatGPT):
- Faked court evidence. I don’t think video-rendering AI is quite there yet, but imagine it becomes possible to frame someone by altering video footage of a crime, using samples of their face, so convincingly that the fake is impossible to distinguish from the real thing. Does the justice system have to change because of AI, e.g., requiring more corroborating evidence, if footage can no longer be trusted?
- Faked (and dangerous) audio. Imagine I have a recorder in my pocket and collect N clips of your voice. I take all the samples and pass them through an AI capable of synthesizing your voice. I could generate a threatening recording, or maybe something very embarrassing..., or perhaps a fake political scandal that sways your upcoming election, or... “I swear, that’s not me!” That is scary stuff. Lives can be destroyed like this. For 200,000 years we could trust our eyes and ears – is that period of humanity over?
- Social media (or any electronic media) bots completely indistinguishable from human accounts – i.e., they can even go sign up for a Google Voice number to receive a 2FA code on. It is already hard to fight misinformation and propaganda – bots of this capacity would take it to the near-impossible level.
- If you really want to have nightmares tonight, consider this.
It’s not that poor me will be out of a job that worries me... it’s that innocent people could be jailed or have their families torn apart, democratic elections could end, and so on.
Futuristic Applications I’d Love Before My Grave
Here are some random things I would love to see possible with AI, again not necessarily with LLMs but in general, in my lifetime:
- I would love to give an AI a complete, large, working application, with unit tests – that is a very good starting “prompt” – and tell it to convert the whole thing to another language, equivalent libraries and all. For example, I want to hand it a complete Flask+uWSGI application and say “go write this in rust/go/elixir/etc., and it should be at least as performant and at least as well tested” – and it should not implement everything from scratch, but rather use the same kinds of packages (e.g., the web server) that a human expert in that area would. I think we are probably not too far from this. You could prototype something in a language you love and have it spit out a well-performing version in a language that would be painful to write by hand!
- I want to be able to feed an AI my favorite TV shows from the past (ended or cancelled) and have it generate new episodes, ideally well-written ones... We are probably very far from this – not only does the writing have to work (we are probably close to that), but we are talking about lifelike video generation with body movement and so on. When we get there... celebrities and stunt doubles should worry.
- Same thing for music (that said, GENERATIVE METAL is already a thing, and it’s amazing!)
- And video games – e.g., automatic DLC creation or extra levels/missions – what a boost that would be for gaming studios.
- We are going to need to bring Boston Dynamics into the mix for this last one – and some of its applications are certainly NSFW. I think Westworld would be the ultimate application of AI on Earth: lifelike in both consciousness and movement. Whether it will ever happen, who knows. There are countless applications for this – and even the “less pious” ones portrayed in that show, such as war/combat or ⚠️..ahem⚠️, would, in my opinion, reduce harm to real humans.