Events
Jun 18, 2024 - Georg Zoeller
Podcast: Jane 'Ms. Cyberpenny' Lo interviews Georg Zoeller
Cybersecurity podcaster Jane 'Ms. Cyberpenny' Lo interviewed C4AIL Co-Founder Georg Zoeller on the sidelines of SuperAI, covering AI FOMO, challenges, opportunities, and the AI literacy gap
Topic Timestamps
00:44 - FOMO in AI adoption? What are some concerns?
01:34 - AI's early stage can lead to overblown claims
03:01 - Competitive market pressures? Some observations and experiences
03:39 - Incentive systems shape product development
03:48 - One example: reward systems shape gameplay
04:21 - KPIs run big tech
04:40 - Measurable KPIs can blind us to societal impact (e.g. prioritizing hyper-growth over long-term well-being)
05:00 - Both games and AI development are data-driven: build, measure, optimize
05:31 - Decentralized GPUs & open-source AI - impacts on market dynamics?
06:06 - Pragmatic approach to AI: Like gunpowder, once the knowledge is out, there's no putting the AI genie back in the bottle
06:58 - Energy-intensive AI models conflict with sustainability goals
07:41 - Addressing AI's energy demands with blockchain seems counterintuitive, given its own high energy footprint
08:00 - Societal Harms - Are there technological solutions?
08:35 - Big tech's claims of safe, closed AI models suggest distrust of public data and a grab for control
08:52 - LLMs like GPT-4 offer information access similar to Google Search. Shouldn't potential misuse regulations apply equally to both?
09:17 - The current narrative suggests that pumping AI with more compute can solve complex problems
10:05 - The dominant narrative surrounding AI & Web3 emphasizes a future-oriented justification for rapid adoption
10:23 - Fueled by inflated valuations, these narratives create a host of problems, from security vulnerabilities to job security anxieties
10:46 - AI Literacy Gap - what are some AI limitations?
11:11 - The No. 1 limitation: not delivering value right now, even though the technology is a quantum leap
12:59 - One challenge is the breakneck pace of AI development. Solutions built today may be outdated by launch (e.g. edge models, LaMDA, vision transformers)
13:48 - Given such a "technological explosion", waiting can sometimes be wise
14:15 - The rush to adoption could be due to various factors (e.g. free cloud training, or consultancies prioritizing implementation)
15:54 - We can leverage a pool of experts to create a reliable knowledge base
16:28 - Key takeaways - proactive steps ordinary users can take?
16:36 - AI is a new and transformative capability, akin to gunpowder or the printing press
17:05 - View AI as a "core" with "scaffolding" (custom software) to mitigate risks (e.g. security, misinformation)
17:39 - Understanding the AI core (not necessarily deep dives) is crucial to future-proofing careers
18:13 - AI adoption is unavoidable. Governments must embrace it, AI leaders need to step up. The old playbook is obsolete
18:40 - One example: applying first-principles thinking to "digital agents". Automation requires roughly 99.9% accuracy at each step, yet LLM error rates (around 10%) remain high despite advancements. Since errors compound across steps, this impedes agent adoption (see the sketch after this list)
20:11 - A first-principles understanding helps separate the hype from reality
20:50 - For the past 20 years, we've relied heavily on tech evangelists to guide technology adoption - which often prioritizes technology over impacts (e.g. job disruption)
22:24 - AI adoption needs diverse leadership for balanced decisions encompassing social, economic, and company values
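As a back-of-the-envelope illustration of the compounding-error point at 18:40, here is a minimal Python sketch. It assumes each step of an agent succeeds or fails independently; the step counts and per-step accuracy figures are illustrative assumptions, not numbers from the interview:

```python
# Illustrative sketch: why per-step error rates compound in multi-step agents.
# Assumes independent steps; accuracies and step counts are made-up examples.

def end_to_end_success(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step of a multi-step agent succeeds."""
    return per_step_accuracy ** steps

# Compare a ~10% per-step error rate against the 99.9% reliability target.
for accuracy in (0.90, 0.999):
    for steps in (5, 10):
        print(f"{accuracy:.1%} per step, {steps} steps "
              f"-> {end_to_end_success(accuracy, steps):.1%} end-to-end")
```

With 90% per-step accuracy, a 10-step agent completes successfully only about 35% of the time, while 99.9% per-step accuracy keeps it near 99%. That gap is the first-principles reason agent adoption stalls at current LLM error rates.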