Google is facing backlash after its slick Gemini AI video demo turned out to rely heavily on human guidance and editing. While showcasing apparent real-time visual and verbal interactions, the AI was just receiving descriptive text prompts about video stills without live analysis.
Despite portrayals of seamless multimodal processing, Gemini functioned much like ChatGPT: text in, text out. Google defended its selective editing as a way to "inspire developers," but the buried disclaimers and misleading hype have left many questioning whether the deception was necessary.
What do you think? Did Google have to fake it to this extent, or could transparency about current AI limitations still generate excitement? Does this undermine trust in AI demos?