AI Can Now See, Hear, Talk, Taste, and Act

A recent piece in Psychology Today underscores how AI is transcending the traditional text-based domain and entering a multisensory realm. According to the article, new systems are beginning to “see, hear, talk, taste, and act”, effectively sharing with humans many of the experiential channels once assumed to be uniquely human.

One major shift is sensory extension: AI models are no longer limited to processing text and images; they also model flavour, texture, shape-sound associations, and other multi-modal inputs. The article highlights studies in which machines mimic cross-modal associations that humans make, such as linking colours with sounds or tastes with shapes. Meanwhile, AI assistants are becoming embedded in physical spaces and devices, so they can act in the world rather than simply respond to prompts.

But the psychological and ethical dimension is what the author emphasises most. When AI attains sensory and action capabilities, it challenges human agency, perception, and experience. The article warns that relying on AI to interpret our sensations or to act on our behalf may weaken our own capacities to think, feel, desire, and act authentically.

Finally, the piece offers a framework for preserving human agency — through Awareness (recognising when we use AI), Appreciation (valuing our own capacities), Acceptance (adapting to the integration of AI) and Accountability (taking responsibility for tool-use). The message is clear: technology may advance, but we must advance our self-understanding and habits of mind in parallel.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
