In a new study, researchers have developed a tool to help visually impaired people, who often find memes difficult to make sense of, enjoy them. The researchers, at Carnegie Mellon University, built the tool to automatically identify memes and apply prewritten templates that add descriptive alt text, making the images intelligible via existing assistive technologies.

Memes largely live within social media platforms that put up barriers to adding alt text. Twitter, for example, allows people to add alt text to their images, but that feature isn't always easy to find. Of 9 million tweets the CMU researchers examined, one million included images and, of those, just 0.1 per cent included alt text.

Cole Gleason, a Ph.D. student at CMU's Human-Computer Interaction Institute, said basic computer vision techniques make it possible to describe the image underlying each meme, whether it be a celebrity, a crying baby, a cartoon character or a scene such as a bus upended in a sinkhole. Optical character recognition techniques are used to decipher the overlaid text, which can change with each iteration of the meme, according to the study, presented at the ACM SIGACCESS Conference on Computers and Accessibility.

For each meme type, only one template describing the underlying image needs to be written; the overlaid text can then be added for each iteration of that meme. Writing out what a meme is actually intended to convey, however, proved difficult.

The team also created a platform to translate memes into sound rather than text. Users search a sound library and drag and drop elements into a template. The system was designed to translate existing memes by conveying their sentiment through music and sound effects.
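The study itself doesn't publish code, but the pipeline described above (match a meme image against known templates, run OCR on the overlaid caption, then fill a prewritten description) can be sketched in a few lines of Python. This is a minimal illustration under assumed tooling, not the authors' implementation: the `MEME_TEMPLATES` table, the file names, the hash values and the choice of perceptual hashing (`imagehash`) plus Tesseract OCR (`pytesseract`) are all assumptions for the example.

```python
# Rough sketch of the kind of pipeline the article describes:
# (1) match the image against known meme templates, (2) OCR the
# overlaid text, (3) fill a prewritten alt-text template.
# Library choices and all names here are illustrative assumptions.
from PIL import Image
import imagehash     # pip install imagehash
import pytesseract   # pip install pytesseract (needs the Tesseract binary)

# One handwritten description per meme type, keyed by a perceptual
# hash of a canonical example image (hash value is hypothetical).
MEME_TEMPLATES = {
    imagehash.hex_to_hash("d1d1e1c3c3c78f0f"): (
        "Distracted boyfriend meme: a man walking with his girlfriend "
        "turns to look at another woman. Overlaid text: {text}"
    ),
    # ... one entry per known meme template ...
}

MATCH_THRESHOLD = 10  # max Hamming distance to count as the same template

def describe_meme(path: str) -> str | None:
    """Return alt text for a meme image, or None if no template matches."""
    image = Image.open(path)
    h = imagehash.phash(image)  # perceptual hash survives re-encoding

    # Find the closest known template by Hamming distance.
    best = min(MEME_TEMPLATES, key=lambda known: known - h)
    if best - h > MATCH_THRESHOLD:
        return None  # unrecognised image; needs generic handling

    # The overlaid caption changes with each iteration, so OCR it per image.
    overlaid = pytesseract.image_to_string(image).strip()
    return MEME_TEMPLATES[best].format(text=overlaid or "none detected")

if __name__ == "__main__":
    print(describe_meme("meme.jpg"))
```

Perceptual hashing is a natural fit for this kind of matching because reposted memes are typically re-encoded, resized or lightly compressed copies of the same base image, and the hash distance stays small under those changes.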
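The audio platform is described only at a high level, but its core operation, layering sound effects over background music to convey a meme's sentiment, could look something like the sketch below using the pydub library. The sound files, offsets and output name are invented for illustration and are not taken from the researchers' system.

```python
# Illustrative only: layer sound effects over a backing track, roughly
# the kind of composition the audio-meme platform lets users assemble
# by drag and drop. File names and timings are hypothetical.
from pydub import AudioSegment  # pip install pydub (requires ffmpeg)

# A cheerful backing track sets the overall sentiment.
background = AudioSegment.from_file("upbeat_loop.mp3")

# Sound effects stand in for the meme's visual elements.
baby_laugh = AudioSegment.from_file("baby_laugh.wav")
air_horn = AudioSegment.from_file("air_horn.wav")

# Drop each effect onto the timeline at a chosen offset (milliseconds).
mix = background.overlay(baby_laugh, position=500)
mix = mix.overlay(air_horn, position=2000)

mix.export("audio_meme.mp3", format="mp3")
```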