Amazon-sponsored artwork that ‘learns’ people’s emotions debuts at Smithsonian

The sculpture, titled “me + you”, interprets human emotion as coloured patterns

The artwork was sponsored by Amazon Web Services

The Smithsonian exhibition features a sculpture by Suchi Reddy, sponsored by Amazon Web Services, that employs artificial intelligence to weave viewers’ feelings into the artwork. Titled “me + you”, the sculpture listens to what visitors have to say about the future and renders their words into a display of coloured lights and patterns. Amazon donated 1,200 hours of programming to help the artwork translate speech to text and even analyze the mood conveyed in a speaker’s voice. The piece is the focal point of a new exhibit at the Smithsonian Arts and Industries Building, which opened to the public for the first time in 20 years.
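The article does not name the specific AWS services behind the speech-to-text step, so the following is only a minimal sketch of how spoken visitor responses might be transcribed using Amazon Transcribe; the bucket, file, and job names are hypothetical.

```python
# Illustrative only: the exhibit's actual speech-to-text pipeline is not described
# in the article. This sketch assumes Amazon Transcribe and placeholder S3 paths.
import time
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Kick off a batch transcription job for a recorded visitor response (hypothetical file).
transcribe.start_transcription_job(
    TranscriptionJobName="visitor-response-001",
    Media={"MediaFileUri": "s3://example-bucket/visitor-response-001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
)

# Poll until the job finishes, then print where the transcript can be fetched.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="visitor-response-001")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(5)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```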

The words and their meanings are then reinterpreted as a pattern of colourful lights in the sculpture. At their most basic, positive emotions tend to translate into calming blue, green, and purple hues, while words that imply anger may set off a cascade of hues from the opposite side of the colour wheel. The lights will turn red if you say a swear word.
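In the spirit of the mapping described above, here is a minimal sketch of how a sentiment label might be turned into a colour palette. The palette values, profanity list, and function names are hypothetical, not the artist’s actual code.

```python
# Hypothetical sentiment-to-colour mapping, loosely following the article's description:
# positive moods -> blues/greens/purples, anger -> warm hues, profanity -> red.
PALETTES = {
    "POSITIVE": [(70, 130, 180), (60, 179, 113), (147, 112, 219)],  # blue, green, purple
    "NEGATIVE": [(220, 20, 60), (255, 140, 0), (255, 69, 0)],       # opposite side of the colour wheel
    "NEUTRAL":  [(200, 200, 200), (160, 160, 160)],                 # soft greys
}

PROFANITY = {"damn", "hell"}  # placeholder list; the real filter is not described


def colours_for(text: str, sentiment: str) -> list[tuple[int, int, int]]:
    """Return an RGB palette for a visitor's words and detected sentiment."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & PROFANITY:
        return [(255, 0, 0)]  # swear words force the lights to red
    return PALETTES.get(sentiment, PALETTES["NEUTRAL"])


print(colours_for("The future feels hopeful", "POSITIVE"))
```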

No matter the sentiment, Reddy said, “I want to show all human emotion as beautiful.”

As artificial intelligence progresses, the interpretations will grow more nuanced. Swami Sivasubramanian, vice president of Amazon Machine Learning at Amazon Web Services, explained that the artwork includes sentiment analysis, which decodes not only the meaning of the words but also the speaker’s mood behind them.
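One plausible way to perform the sentiment-analysis step Sivasubramanian describes is with Amazon Comprehend; the article does not say which models the exhibit actually uses, so this is a sketch under that assumption.

```python
# Minimal sentiment-analysis sketch using Amazon Comprehend (an assumption;
# the exhibit's actual sentiment models are not specified in the article).
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="I hope the future brings us closer together.",
    LanguageCode="en",
)

# Comprehend returns an overall label plus per-class confidence scores.
print(response["Sentiment"])       # e.g. "POSITIVE"
print(response["SentimentScore"])  # {'Positive': ..., 'Negative': ..., 'Neutral': ..., 'Mixed': ...}
```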

A companion website allows people to submit their thoughts online and receive a visual depiction of their feelings, which is then added to the archive. In an era of widespread scepticism about Big Tech’s data collection, Reddy and her colleagues were careful not to collect any data other than people’s predictions for the future. According to Reddy, no video is being captured, and nothing traces people’s expressions back to them.
