In a thought-provoking TED Talk, computer scientist Jennifer Golbeck explores the current state of artificial intelligence (AI) and whether we are truly advancing or have hit a plateau. She addresses the hype surrounding AI, particularly the fears about artificial general intelligence (AGI) and its implications for humanity.
Key Takeaways
- AI has made significant strides in specific tasks but may not be close to achieving AGI.
- Concerns about AI often stem from sensationalised narratives rather than actual technological capabilities.
- The reliability of AI systems remains a major challenge, with issues like bias and hallucination.
- The future of AI depends on improving technology and addressing ethical concerns.
The Current State Of AI
AI has come a long way, especially in performing specific tasks. We’ve seen AI beat chess masters and assist in various fields. However, the conversation has shifted towards the idea of AGI, which is AI that can perform any intellectual task that a human can do. This raises questions about whether we are nearing this level of intelligence.
Many tech leaders have warned that the AI we are developing is so powerful that it poses a threat to humanity. This is unusual; typically, new technologies are not described as existential threats. So, why the sudden alarm?
Reasons Behind The Alarm
- Profit Motive: If a technology is deemed powerful enough to destroy civilisation, companies can profit immensely before any regulations come into play. This creates a compelling narrative for investors.
- Cinematic Influence: The idea of AI surpassing human intelligence is a popular theme in movies, which can distract from real issues we face today with AI.
Real Issues With AI
While we ponder the future of AGI, we often overlook pressing problems caused by current AI technologies. Deepfakes and biased decision-making in criminal justice, for instance, are doing harm right now. These issues demand our immediate attention more than speculative fears about AGI do.
Are We Close To AGI?
Some believe we are on the brink of achieving AGI, with figures like Elon Musk claiming it could happen within a year. However, the reality is that many AI tools still struggle with basic tasks. For example, Google’s AI search tool recently provided nonsensical answers, highlighting that we are far from achieving reliable AI.
The Challenge Of Reliability
The main challenge we face with AI is reliability: many of these systems make frequent mistakes. When I use ChatGPT to summarise discussions, for instance, I often find myself correcting its errors. That unreliability is a significant barrier to trusting AI with critical tasks.
The Problem Of Hallucination
AI systems often generate false information, a phenomenon known as "AI hallucination." This occurs when AI produces responses that sound plausible but are entirely fabricated. For example, when I asked ChatGPT about violent threats, it invented responses that had no basis in reality. This problem must be solved before AI can be trusted with high-stakes work.
The Need For Better Data
To improve AI, we need vast amounts of quality data. However, much of the reliable data has already been consumed, and models increasingly learn from low-quality content available online. That risks a downward spiral in which each generation of models produces poorer outputs than the last.
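This feedback loop can be sketched with a deliberately simple toy model (entirely illustrative, not from the talk): treat a "model" as a distribution over words and retrain it each generation only on the words it commonly emits. Rare words fall below the cutoff and vanish, so vocabulary and diversity shrink.

```python
import math

# Toy illustration (made-up numbers): a "model" summarised as a word
# distribution. Each generation it is retrained only on content it
# commonly produces, so rare words fall below a cutoff and disappear.

def next_generation(dist, cutoff=0.02):
    """Drop words the model rarely emits, then renormalise."""
    kept = {w: p for w, p in dist.items() if p >= cutoff}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

def entropy(dist):
    """Diversity of the distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist.values())

# A long-tailed starting vocabulary: 5 common words, 20 rare ones.
dist = {f"common{i}": 0.18 for i in range(5)}
dist.update({f"rare{i}": 0.005 for i in range(20)})
start_entropy = entropy(dist)

for _ in range(3):
    dist = next_generation(dist)

# The 20 rare words are gone after one generation; diversity fell.
print(len(dist), round(start_entropy, 3), round(entropy(dist), 3))
```

The tail never comes back once it is dropped, which is the "downward spiral": each retraining step can only narrow what the previous one produced.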
The Economic Viability Of AI
Investments in generative AI have skyrocketed, but the returns so far have been minimal. Companies are spending billions, yet the revenue generated does not come close to covering those costs. This raises the question of whether the technology will ever be valuable enough to justify what it costs to build and run.
Job Displacement Concerns
Many fear that AI will take over jobs, but the reality is more nuanced. For instance, if a company employs two software engineers and uses AI to double their productivity, they might choose to keep both engineers rather than lay one off. However, if AI becomes too expensive, companies may opt for cheaper alternatives, which could lead to job losses.
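The trade-off above can be put into rough numbers. A minimal sketch, with entirely assumed salaries and tooling costs (none of these figures are from the talk): whether AI assist leads to layoffs depends on demand and on what the tooling costs relative to a salary.

```python
# Hypothetical figures, purely illustrative -- not from the talk.
SALARY = 150_000   # annual cost per engineer (assumed)
AI_SEAT = 30_000   # annual AI tooling cost per engineer (assumed)
SPEEDUP = 2        # an AI-assisted engineer does the work of two

def annual_cost(engineers, with_ai):
    return engineers * (SALARY + (AI_SEAT if with_ai else 0))

def capacity(engineers, with_ai):
    return engineers * (SPEEDUP if with_ai else 1)

# Fixed demand (2 units of work): one AI-assisted engineer suffices
# and is far cheaper than two engineers -- the layoff scenario.
fixed_demand = {"two_engineers": annual_cost(2, with_ai=False),
                "one_plus_ai": annual_cost(1, with_ai=True)}
# {'two_engineers': 300000, 'one_plus_ai': 180000}

# Growing demand (4 units): keeping both AI-assisted engineers beats
# hiring two more -- the gains are absorbed and nobody is laid off.
growing_demand = {"four_engineers": annual_cost(4, with_ai=False),
                  "two_plus_ai": annual_cost(2, with_ai=True)}
# {'four_engineers': 600000, 'two_plus_ai': 360000}
```

Note that if the assumed AI seat price rose above a salary, the arithmetic flips and the human becomes the cheaper option again, which is the point about expensive AI in the paragraph above.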
Addressing Bias In AI
One of the most pressing issues is the bias inherent in AI systems. AI learns from human-generated data, so it absorbs our biases. Attempts to mitigate that bias can create new problems of their own, as with Google's image generation tool, whose safeguards overcorrected and produced historically inaccurate images.
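How a system inherits bias from its training data can be shown with a deliberately tiny example, using made-up approval figures for two hypothetical groups (nothing here is from the talk):

```python
# Toy illustration with invented data: a model fit to biased historical
# decisions faithfully reproduces the bias -- no malice required.

historical = (
    [("group_a", 1)] * 70 + [("group_a", 0)] * 30 +  # 70% approved
    [("group_b", 1)] * 40 + [("group_b", 0)] * 60    # 40% approved
)

def fit_rates(records):
    """'Train' by memorising each group's historical approval rate."""
    totals, approved = {}, {}
    for group, label in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + label
    return {g: approved[g] / totals[g] for g in totals}

model = fit_rates(historical)
# The historical skew is learned verbatim: group_a 0.7, group_b 0.4.
```

Real systems are far more complex than this frequency table, but the failure mode is the same: optimising for fidelity to past decisions reproduces whatever unfairness those decisions contained.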
The Human Element
Ultimately, human intelligence is not just about productivity. It encompasses emotional responses, creativity, and the ability to connect with others. AI may mimic these traits but will never truly replicate them. This is why I’m not overly concerned about AI taking over humanity. If things go awry, we can always turn it off.
In conclusion, while AI has made impressive strides, we must focus on the real challenges it presents today rather than getting lost in the hype about its future potential. The conversation should centre on improving reliability, addressing bias, and ensuring that AI serves humanity positively.