I was helping build an AI-powered product that risked serving hallucinated responses to users. The company had no written plan for any method to detect or prevent hallucinations. To me, that didn’t reflect much creativity or critical thinking, because a product should offer a reliable experience.
I brought up my concern, and, surprisingly, I got heavy pushback. The assumption was that the AI would magically give the most accurate information from the prompt provided. That told me there was a gap in understanding of how AI technology fundamentally works, and a lack of awareness of how common hallucinated responses are today. I knew then that the product would run into reliability issues because of them.
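To show what I mean, even a crude safeguard is possible. Here’s a toy sketch of the kind of check I was asking for (the function names and threshold are my own invention, not anything the company had): flag any model claim whose words barely overlap with the source text it was supposed to be grounded in. Real systems use entailment models or retrieval; this lexical heuristic is only illustrative.

```python
def _words(text):
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(".,!?;:").lower() for w in text.split()}

def token_overlap(claim, source):
    """Fraction of the claim's words that also appear in the source."""
    claim_words = _words(claim)
    if not claim_words:
        return 0.0
    return len(claim_words & _words(source)) / len(claim_words)

def flag_unsupported(claims, source, threshold=0.5):
    """Return the claims whose overlap with the source falls below threshold."""
    return [c for c in claims if token_overlap(c, source) < threshold]

source = "The order shipped on Monday and arrives in three business days."
claims = [
    "The order shipped on Monday.",        # supported by the source
    "Shipping is always free worldwide.",  # not supported by the source
]
print(flag_unsupported(claims, source))    # only the unsupported claim
```

A check this naive would obviously miss plenty, but having *any* automated gate beats the "trust the prompt" approach I was pushing back against.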
Don’t get me wrong, AI technology is seemingly getting more and more magical and impressive. Generate a video clip or image from a short phrase? Awesome! Generate an AI voice that sounds so close to the original person? Impressive! Give long dissertations about some complex topic? Mind-blowing!
But let’s not forget how AI technology essentially works. It converts a large set of videos, images, and text into numbers, or mathematical representations, and it detects simple and complex patterns in those inputs. By detecting patterns, it can set up convincing imitations. Spoken languages, images, and videos all have patterns. Once a model is properly trained and tuned, it’s no wonder that AI technology can output what it does today, because it’s ultimately an enormous analysis of patterns.
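To make the "enormous analysis of patterns" point concrete, here’s a toy sketch: a bigram predictor that does nothing but count which word follows which, then suggests the most frequent follower. Real models are vastly larger and learn far richer representations, but the spirit (turn text into counts and follow the strongest pattern) is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Notice there is no understanding anywhere in this code, just frequency. That’s also why a pattern machine can confidently emit a fluent, plausible, wrong answer: the pattern was strong, the fact was not.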
So seeing how some people hype up AI technology is plain bothersome. Eliminate all mid-level engineers? I feel sorry for the overworked senior engineers and the inexperienced ones. Let AI auto-generate programming code without double-checking it? I shudder thinking about the potential technical debt in that repo. Let AI technology be legal counsel? I hope that person has enough money for the legal bills.
This hyped-up atmosphere around AI technology reminds me of a couple of other technologies: namely, blockchain and quantum computing. Again, don’t get me wrong, these technologies are impressive in their own right. Cryptographically link transactions together to form a decentralized system? I’m in. Explode computing power with qubits? Let’s try and get it done.
But these technologies have been around, or at least promised, for a very long time. For blockchain, groups have been evangelizing decentralized identity for years, and they don’t seem to be getting the traction they think they deserve. Bitcoin has turned into an enormous virtual casino; it represents gambling more than it does a reliable monetary system. Quantum computing has been chasing a reliable system for a long time, and people have dreamt of some kind of consumer-grade quantum computer. Several companies do have quantum computers, but are those machines doing the bulk of their work? No. Microsoft recently promised some kind of consumer-grade quantum chip. I’ll believe it when I see it.
Now, the technologies I just mentioned do have use cases. Blockchain could be good for a monetary system, shipping ledgers, identification, and many other things. Quantum computing could unlock enormous processing power once it matures. The problem I’m seeing is the hyping of the technology, and humans are so hardwired to chase that dopamine hit of something new and exciting.
So AI technology is being hyped by some groups, and these “hypers” are leading people off to some wonderland. When hype becomes the predominant emotion, the purpose and utility of the technology get lost. I hear sad stories (my own included) of misguided people basing entire businesses on the silly notion that AI technology will make life so much easier that most of the work no longer needs to be done. It’s almost a get-rich-quick scheme, and I don’t like those, because the work seems inauthentic and the person comes off as lazy.
In my opinion, the best use case for AI technology is to help people develop whatever they’re working on faster. It’s great when AI suggests functions or autofills my code. It’s also a fantastic tool for critiquing grammar or written content. But I don’t let it do the work for me, because the quality of my work can suffer if I’m not vigilant over AI-generated content. I feel more alive when I put in the work behind the creativity and critical thinking in what I produce. I hope the hype around the technology gets toned down so we can see how it can genuinely facilitate everyday work tasks.
I recently came across some articles showing how AI needs to be questioned and evaluated more carefully.
Microsoft CEO Admits That AI Is Generating Basically No Value