AI-powered apps reshape self-perception for blind users
Artificial intelligence (AI) is providing blind people with unprecedented access to visual feedback about their own appearance, transforming daily routines and self-perception, though experts caution that the technology may have complex psychological effects.
Apps like Be My Eyes and Envision now allow blind users to receive detailed analyses of their faces and bodies through image recognition and AI-powered feedback. The technology can describe skin condition, facial features, and even suggest styling or makeup adjustments, functioning as a “digital mirror” for users who have never been able to see themselves.
Lucy Edwards, a blind content creator, described how AI feedback allows her to understand her appearance after years of relying solely on descriptions from others. “Suddenly we have access to all this information about ourselves, about the world, it changes our lives,” she said.
Experts warn, however, that such tools can inadvertently reinforce unrealistic beauty standards. Helena Lewis-Smith, a body image researcher at the University of Bristol, noted that AI often compares users against idealized Western beauty norms, which could negatively affect mental health, especially for those who cannot cross-check information visually.
Envision CEO Karthik Mahadevan said that while the apps were initially designed for basic tasks like reading text or navigating the world, users increasingly employ them for personal grooming and styling. “Often the first question they ask is how they look,” he said.
AI’s growing role as a personal visual assistant raises both empowerment and risk. Users can control how feedback is provided—whether descriptive, poetic, or evaluative—but inaccuracies and algorithmic biases remain a concern. Some services, such as Aira Explorer, offer human verification of AI descriptions to improve reliability.
Researchers emphasize that body image is multi-dimensional, influenced by context, social comparison, and personal agency—factors that AI cannot fully capture. Yet for many blind users, the technology offers newfound independence and self-understanding. Edwards said: “Even though we don’t see visual beauty in the same way sighted people do, AI allows us to experience aspects of ourselves we thought we’d lost.”
As AI continues to expand into daily life, specialists call for careful study of its emotional and psychological impact on blind communities, balancing empowerment with awareness of potential harms.
With inputs from BBC
Cheapfakes remain a problem despite rise of deepfakes: US expert
Dr Heather Ashby, a US foreign policy and national security expert, said on Tuesday that cheapfakes remain a problem despite the rise of deepfakes.
She said a cheapfake is a form of manipulated media in which video, audio and images are altered using relatively simple, low-cost editing tools, while a deepfake is a form of synthetic media in which video, audio and images are manipulated using artificial intelligence (AI).
The US expert was sharing how technology impacts elections at an event titled "Leveraging Technology and AI for Accurate Foreign Affairs and Election Reporting" at the EMK Center on Tuesday. The event was jointly organised by the US Embassy in Dhaka and the Diplomatic Correspondents Association, Bangladesh (DCAB).
US Embassy in Dhaka spokesperson Asha C. Beh and DCAB President Nurul Islam Hasib also spoke at the event.
On cheapfakes, Ashby said that Chinese foreign ministry spokesperson Lijian Zhao posted a fabricated image of an Australian soldier holding a bloody knife next to a child in late 2020.
"Days after the 2020 US presidential election, videos circulated on social media purporting to show election workers engaging in voting fraud. The misleading video circulated on Twitter, gathering attention from users and serving as doctored evidence of supposed fraud during the election. Local law enforcement investigated the location in the video to prove that it was false," she said, citing examples of cheapfakes.
Ashby, whose work focuses on the intersection of national security, AI, and election integrity, said AI-generated images and videos are also used for satire and parody.
"Numerous deepfakes have circulated in the US presidential election which are clearly fake images used for humour," she said while giving examples.
There are tools to identify deepfakes and cheapfakes.
The most sophisticated tools, developed by private companies and other organisations, include Sensity AI, the Content Authenticity Initiative, Hugging Face, Deep Media, Deepfake-o-Meter, Reality Defender, and TrueMedia, she said.
Replying to a question on how AI is being used in foreign policy practices, she said, "What I have noticed with the use of AI, AI works best if you have a problem or a challenge you're trying to identify that AI can then help with."
"In terms of AI and national security within the US, the US government, particularly in the security area, has been using AI a lot longer than what we are aware of with ChatGPT's release in late 2022, mainly because they process a lot of data and it's not possible for an individual to go through that data," said the US expert.
Rather than relying on a single software programme, she said, AI makes it easier for agencies to bring various data points together and look for anomalies that may indicate, for example, that a terrorist attack is being planned.
"Or if you go to the Department of Homeland Security's website, they provide insight into the various ways that they're using AI within their law enforcement security operations, as well as within Department of Homeland Security’s emergency response, so the Federal Emergency Management Agency, if disaster strikes, they respond to it, and so they're using AI within employees," she said.