AI tool
Inclusive AI can drive progress: Nahar Khan tells Global South Media Forum in Kunming
Highlighting the importance of investing in local-language AI (artificial intelligence) models, UNB Executive Editor Nahar Khan on Sunday said AI can be a tool for progress, but only if people wield it with integrity, inclusivity and foresight.
"The future of news in the Global South will not be written by algorithms alone. It will be written by us, by our choices, by our courage, and by our commitment to truth," she said.
Nahar, also Executive Director of Cosmos Foundation, made the remarks while speaking at a session titled "The Rise of the Global South: Economic Development and Cultural Confidence" at the Global South Media and Think Tank Forum-2025.
She said the Global South has long been spoken about, too rarely spoken for. "AI offers us a chance to change that - but only if we act collectively."
"Let us ensure that our audiences, our languages, and our stories are not left behind in this transformation," Nahar said.
The five-day forum, which opened Saturday, is co-hosted by Xinhua News Agency, the Communist Party of China Yunnan Provincial Committee and the People's Government of Yunnan Province.
Freddy Alfred Niáñez Contreras, Vice President of the Council of Ministers and Minister of Information of Venezuela (in a video message); Liu Gang, Director-General of Xinhua Institute; Kubatbek Rakhimov, CEO of the Public Foundation Applikata Center for Strategic Solutions of Kyrgyzstan; Wen Jian, Director of the Communication Strategy Center, Xinhua Institute; Erika Hoffmann Jauge, President of the National Public Media of Uruguay; Shakil Ahmed, Chief Executive Officer of the Asian Institute of Eco-Civilization Research and Development of Pakistan; Sun Ming, Vice President of the Academy of Contemporary China and World Studies; Li Yuefen, South Centre G20 Sherpa and Senior Advisor on South-South Cooperation and Development Finance; Merthold Macfalle Monyae, Director of the Centre for Africa-China Studies at the University of Johannesburg; Hamed Vafaei, Director of the Asian Studies Research Center at the University of Tehran; Selçuk Colakoglu, Founding Director of the Turkish Center for Asia-Pacific Studies; Timofey V., an expert at the Global Fact-Checking Network of Russia; and Hou Sheng, Vice President of the Yunnan Academy of Social Sciences, also spoke at the discussion, which was moderated by Liu Hua, Director of Communications and Public Affairs at Xinhua Institute.
Challenge of Voice and Trust
In a rapidly changing information ecosystem, Nahar said, the true test for the Global South lies in how its voices and perspectives are acknowledged, accurately represented, and valued on the global stage.
At the heart of this is trust, she said: news is only as strong as the credibility it carries, and in the age of AI, that trust faces both great pressure and great possibility.
More and more audiences across the Global South are consuming news in new ways - through platforms, short videos, and increasingly AI-driven feeds and interfaces, Nahar observed.
How to Use ChatGPT and Other AI Tools to Improve Your IELTS Writing and Speaking Preparation
Preparing for the IELTS exam is a challenging yet rewarding journey. With the rise of AI tools like ChatGPT, learners now have a powerful resource to enhance their writing and speaking skills through personalised, on-demand practice and feedback. Let's find out how ChatGPT and other AI tools can help you prepare effectively for the Writing and Speaking sections of the IELTS exam.
How to Get Your Desired IELTS Score in Writing & Speaking Utilising ChatGPT or Other AI Apps
Generate Writing Topics On-Demand
One of the most effective ways to improve your IELTS writing skills is by consistently practising with new topics. ChatGPT can instantly generate IELTS Writing Task 1 and Task 2 prompts.
You simply need to give a command such as, “Give me a Task 2 essay topic on education,” and within seconds, you will have a practice prompt that mirrors the structure of real IELTS questions. This makes it easy to maintain variety in your writing and ensures you are exposed to a wide range of question types, from opinion to problem-solution essays.
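The same request can also be scripted if you prefer a fresh prompt at the start of every study session. Below is a minimal sketch, assuming access to the OpenAI API through the official openai Python package; the model name and prompt wording are illustrative, not official IELTS material.

```python
# A minimal sketch: generating an IELTS Writing Task 2 prompt on demand.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model will do
    messages=[
        {"role": "system", "content": "You are an IELTS writing tutor."},
        {"role": "user",
         "content": "Give me a Task 2 essay topic on education, "
                    "phrased exactly like a real IELTS question."},
    ],
)

print(response.choices[0].message.content)
```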
Analyse and Rewrite Your Essays
After completing a task response, you can ask ChatGPT or any similar AI app to review it. For example, using a prompt such as “Can you give feedback on this IELTS Task 2 essay?” along with your essay allows the chatbot to assess your grammar, vocabulary, coherence, and overall structure.
It will then break down your writing into the four IELTS scoring criteria—Task Response, Coherence and Cohesion, Lexical Resource, and Grammatical Range and Accuracy—and even give you a band estimate. You can also ask ChatGPT to rewrite your essay in a more academic or formal tone, helping you learn how to polish your writing.
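If you want that criterion-by-criterion feedback in a repeatable form, the same idea can be expressed as a short script. This is a sketch under the same assumptions as the example above (the openai package and an API key); the file name my_task2_essay.txt is a hypothetical placeholder for your own draft.

```python
# A minimal sketch: requesting feedback on a Task 2 essay under the four
# IELTS criteria plus an estimated band. Assumes the `openai` package and
# an OPENAI_API_KEY; "my_task2_essay.txt" is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

with open("my_task2_essay.txt", encoding="utf-8") as f:
    essay = f.read()

feedback = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Can you give feedback on this IELTS Task 2 essay? Assess Task "
            "Response, Coherence and Cohesion, Lexical Resource, and "
            "Grammatical Range and Accuracy, then estimate a band score.\n\n"
            + essay
        ),
    }],
)

print(feedback.choices[0].message.content)
```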
Practice Academic Writing Style
Many test takers struggle with maintaining the required formal tone in Writing Task 2. ChatGPT can help you rewrite informal sentences into a more suitable academic style. For example, you might paste a paragraph and say, “Make this more formal and academic,” and it will return a refined version. This repeated exposure to academic phrasing trains your brain to automatically choose more formal constructions when writing under pressure.
Simulate IELTS Speaking Part Practice
ChatGPT is capable of simulating IELTS Speaking Parts 1, 2, and 3 by providing realistic prompts that resemble those used by actual examiners. Ask something like, “Pretend you are an IELTS examiner. Ask me Part 1 speaking questions,” and it will provide one question at a time, just like the real test.
For Part 2, it can generate cue cards and even give you feedback on your response if you choose to type it out afterward. Practising in this format can help reduce anxiety, improve your fluency, and train you to think on your feet.
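For readers who like to practise from a script rather than the chat window, the examiner simulation can be reproduced with a short loop that keeps the conversation history, so each new question follows on from your previous answer. A minimal sketch, under the same assumptions as the earlier examples:

```python
# A minimal sketch: a typed, turn-by-turn IELTS Speaking Part 1 simulation.
# The full message history is resent each turn so the "examiner" remembers
# earlier answers. Assumes the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "system",
    "content": "Pretend you are an IELTS examiner. Ask me Part 1 speaking "
               "questions one at a time and wait for my answer.",
}]

for _ in range(5):  # five question-and-answer turns
    reply = client.chat.completions.create(model="gpt-4o-mini",
                                            messages=messages)
    question = reply.choices[0].message.content
    print("Examiner:", question)
    messages.append({"role": "assistant", "content": question})

    answer = input("You: ")
    messages.append({"role": "user", "content": answer})
```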
Improve Fluency Through Interactive Conversations
To sound more fluent and natural during the Speaking test, you need regular practice in forming complex ideas and expressing them clearly. ChatGPT can hold extended conversations with you on various IELTS-friendly topics like environment, education, technology, and culture.
This back-and-forth dialogue sharpens your ability to expand answers, use linking devices, and apply topic-specific vocabulary. While ChatGPT cannot listen to your pronunciation, typing your answers and receiving written feedback still helps you develop rhythm and cohesion in your speech.
Get Grammar and Vocabulary Feedback
One of the most frustrating aspects of self-studying IELTS is identifying your grammar and vocabulary weaknesses. With ChatGPT, you can paste any piece of writing or a sample speaking response and ask, “Please highlight grammar mistakes and suggest better word choices.”
It will analyse your work line by line, correcting errors and suggesting more advanced or suitable alternatives. You can even request vocabulary lists for specific topics or ask how to paraphrase certain phrases more effectively, helping you master the Lexical Resource band criterion.
Train ChatGPT to Evaluate Writing with Model Answers
You can use a more advanced method by “training” ChatGPT to assess writing against model answers. Provide five sample essays rated from Band 5 to Band 9, then include your own essay.
Instruct ChatGPT to compare your answer to the model responses and provide a full band-score evaluation based on IELTS criteria. This method is particularly helpful because it anchors ChatGPT’s feedback to the scoring system used by IELTS examiners, making the evaluation more reliable.
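As a sketch of how this few-shot approach might look in script form, the graded sample essays are supplied as context before your own essay is scored. The file names band5.txt to band9.txt and my_task2_essay.txt are hypothetical placeholders for your own collection of model answers; assumptions are the same as in the earlier examples.

```python
# A minimal sketch: few-shot band-score evaluation against model answers.
# band5.txt ... band9.txt and my_task2_essay.txt are hypothetical
# placeholders; assumes the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

samples = []
for band in (5, 6, 7, 8, 9):
    with open(f"band{band}.txt", encoding="utf-8") as f:
        samples.append(f"--- Band {band} sample essay ---\n{f.read()}")

with open("my_task2_essay.txt", encoding="utf-8") as f:
    my_essay = f.read()

prompt = (
    "Here are five IELTS Writing Task 2 essays previously rated Band 5 to "
    "Band 9:\n\n" + "\n\n".join(samples) +
    "\n\nCompare the following essay to these model responses and give a "
    "full band-score evaluation against the IELTS criteria:\n\n" + my_essay
)

evaluation = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(evaluation.choices[0].message.content)
```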
Limitations of Using ChatGPT and Other AI Tools for IELTS Preparation
Not a Certified IELTS Examiner
While ChatGPT can provide helpful evaluations and corrections, it is not a certified IELTS examiner. This means the feedback it gives—especially band score predictions—may not always be accurate. Its understanding of IELTS criteria is broad but not specific to the training that human examiners undergo. Therefore, its scoring should be viewed as approximate, not official.
Lacks Clear Opinion and Argumentation
A major limitation of ChatGPT in essay writing is its tendency to maintain a neutral tone. It is designed to present balanced viewpoints without taking a firm stand. However, in many IELTS Writing Task 2 essays, particularly those that ask for an opinion, test-takers are expected to take a clear position and support it with arguments. Overly vague or generic answers can lower your score for Task Response and reduce clarity.
No Personal Experience in Speaking Responses
Another major drawback is ChatGPT’s inability to reference personal experience. In the IELTS Speaking section, drawing from your real-life stories and opinions helps make your responses more engaging and authentic. Since ChatGPT and similar AI apps can only provide general or hypothetical examples, their answers often lack the depth and individuality that examiners look for.
Does Not Evaluate Pronunciation
Although ChatGPT can assess grammar and vocabulary through written responses, it cannot listen to your speech. This means it cannot evaluate crucial aspects like pronunciation, stress, or intonation—key components of the IELTS Speaking score.
Information May Be Inaccurate or Generic
Lastly, ChatGPT is not affiliated with IELTS, and its suggestions should be cross-checked with official resources. Always verify content accuracy and avoid using it as your only source of preparation. AI serves as a supplementary tool—not a substitute for official materials or expert advice.
In a Nutshell
ChatGPT and other AI tools can be powerful allies in your IELTS Writing and Speaking preparation, offering flexible, instant practice and helpful insights. From generating topics to refining grammar and vocabulary, they add great value to self-study. However, it is essential to balance AI support with official IELTS materials, expert feedback, and personal effort. While ChatGPT enhances learning, it should complement, not replace, structured preparation from certified sources and real-world language practice.
Not magic: Opaque AI tool may flag parents with disabilities
For the two weeks that the Hackneys’ baby girl lay in a Pittsburgh hospital bed weak from dehydration, her parents rarely left her side, sometimes sleeping on the fold-out sofa in the room.
They stayed with their daughter around the clock when she was moved to a rehab center to regain her strength. Finally, the 8-month-old stopped batting away her bottles and started putting on weight again.
“She was doing well and we started to ask when can she go home,” Lauren Hackney said. “And then from that moment on, at the time, they completely stonewalled us and never said anything.”
The couple was stunned when child welfare officials showed up, told them they were negligent and took their daughter away.
“They had custody papers and they took her right there and then,” Lauren Hackney recalled. “And we started crying.”
More than a year later, their daughter, now 2, remains in foster care. The Hackneys, who have developmental disabilities, are struggling to understand how taking their daughter to the hospital when she refused to eat could be seen as so neglectful that she’d need to be taken from her home.
They wonder if an artificial intelligence tool that the Allegheny County Department of Human Services uses to predict which children could be at risk of harm singled them out because of their disabilities.
The U.S. Justice Department is asking the same question. The agency is investigating the county’s child welfare system to determine whether its use of the influential algorithm discriminates against people with disabilities or other protected groups, The Associated Press has learned. Later this month, federal civil rights attorneys will interview the Hackneys and Andrew Hackney’s mother, Cynde Hackney-Fierro, the grandmother said.
Lauren Hackney has attention-deficit hyperactivity disorder that affects her memory, and her husband, Andrew, has a comprehension disorder and nerve damage from a stroke suffered in his 20s. Their baby girl was just 7 months old when she began refusing to drink her bottles. Facing a nationwide shortage of formula, they traveled from Pennsylvania to West Virginia looking for some and were forced to change brands. The baby didn’t seem to like it.
Her pediatrician first reassured them that babies sometimes can be fickle with feeding and offered ideas to help her get back her appetite, they said.
When she grew lethargic days later, they said, the same doctor told them to take her to the emergency room. The Hackneys believe medical staff alerted child protective services after they showed up with a baby who was dehydrated and malnourished.
That’s when they believe their information was fed into the Allegheny Family Screening Tool, which county officials say is standard procedure for neglect allegations. Soon, a social worker appeared to question them, and their daughter was sent to foster care.
Over the past six years, Allegheny County has served as a real-world laboratory for testing AI-driven child welfare tools that crunch reams of data about local families to try to predict which children are likely to face danger in their homes. Today, child welfare agencies in at least 26 states and Washington, D.C., have considered using algorithmic tools, and jurisdictions in at least 11 have deployed them, according to the American Civil Liberties Union.
The Hackneys’ story — based on interviews, internal emails and legal documents — illustrates the opacity surrounding these algorithms. Even as they fight to regain custody of their daughter, they can’t question the “risk score” Allegheny County’s tool may have assigned to her case because officials won’t disclose it to them. And neither the county nor the people who built the tool have ever explained which variables may have been used to measure the Hackneys’ abilities as parents.
“It’s like you have an issue with someone who has a disability,” Andrew Hackney said in an interview from their apartment in suburban Pittsburgh. “In that case … you probably end up going after everyone who has kids and has a disability.”
As part of a yearlong investigation, the AP obtained the data points underpinning several algorithms deployed by child welfare agencies, including some marked “CONFIDENTIAL,” offering rare insight into the mechanics driving these emerging technologies. Among the factors they have used to calculate a family’s risk, whether outright or by proxy: race, poverty rates, disability status and family size. They include whether a mother smoked before she was pregnant and whether a family had previous child abuse or neglect complaints.
What they measure matters. A recent analysis by ACLU researchers found that when Allegheny's algorithm flagged people who accessed county services for mental health and other behavioral health programs, that could add up to three points to a child’s risk score, a significant increase on a scale of 20.
Allegheny County spokesman Mark Bertolet declined to address the Hackney case and did not answer detailed questions about the status of the federal probe or critiques of the data powering the tool, including by the ACLU.
“As a matter of policy, we do not comment on lawsuits or legal matters,” Bertolet said in an email.
Justice Department spokeswoman Aryele Bradford declined to comment.
NOT MAGIC
Child welfare algorithms plug vast amounts of public data about local families into complex statistical models to calculate what they call a risk score. The number that’s generated is then used to advise social workers as they decide which families should be investigated, or which families need additional attention — a weighty decision that can sometimes mean life or death.
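To make the mechanics concrete, here is a purely illustrative sketch of the general idea, a drastically simplified stand-in for the complex statistical models these agencies actually use; the variables and weights below are invented for illustration and do not describe the Allegheny Family Screening Tool or any other deployed system.

```python
# Purely illustrative: a toy risk-scoring function showing, in principle,
# how family data points can be turned into a number on a 1-20 scale.
# The variables and weights are invented and do NOT describe the Allegheny
# Family Screening Tool or any other real child welfare algorithm.
ILLUSTRATIVE_WEIGHTS = {
    "prior_neglect_referrals": 2.0,
    "used_behavioral_health_services": 3.0,
    "parent_received_disability_benefits": 1.5,
    "prior_juvenile_justice_contact": 1.0,
}

def illustrative_risk_score(family_record: dict) -> int:
    """Sum the weights of whichever flags are present, clamped to 1-20."""
    raw = sum(weight for flag, weight in ILLUSTRATIVE_WEIGHTS.items()
              if family_record.get(flag))
    return max(1, min(20, round(raw)))

# A family flagged only for using behavioral health services scores 3 here,
# echoing the kind of jump the ACLU researchers describe earlier in this story.
print(illustrative_risk_score({"used_behavioral_health_services": True}))
```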
A number of local leaders have tapped into AI technology while under pressure to make systemic changes, such as in Oregon during a foster care crisis and in Los Angeles County after a series of high-profile child deaths in one of the nation’s largest county child welfare systems.
LA County’s Department of Children and Family Services Director Brandon Nichols says algorithms can help identify high-risk families and improve outcomes in a deeply strained system. Yet he could not explain how the screening tool his agency uses works.
“We’re sort of the social work side of the house, not the IT side of the house,” Nichols said in an interview. “How the algorithm functions, in some ways is, I don’t want to say is magic to us, but it’s beyond our expertise and experience.”
Nichols and officials at two other child welfare agencies referred detailed questions about their AI tools to the outside developers who created them.
In Larimer County, Colorado, one official acknowledged she didn’t know what variables were used to assess local families.
“The variables and weights used by the Larimer Decision Aide Tool are part of the code developed by Auckland and thus we do not have this level of detail,” Jill Maasch, a Larimer County Human Services spokeswoman, said in an email, referring to the developers.
In Pennsylvania, California and Colorado, county officials have opened up their data systems to the two academic developers who select data points to build their algorithms. Rhema Vaithianathan, a professor of health economics at New Zealand’s Auckland University of Technology, and Emily Putnam-Hornstein, a professor at the University of North Carolina at Chapel Hill’s School of Social Work, said in an email that their work is transparent and that they make their computer models public.
“In each jurisdiction in which a model has been fully implemented we have released a description of fields that were used to build the tool, along with information as to the methods used,” they said by email.
A 241-page report on the Allegheny County website includes pages of coded variables and statistical calculations.
Vaithianathan and Putnam-Hornstein’s work has been hailed in reports published by UNICEF and the Biden administration alike for devising computer models that promise to lighten caseworkers’ loads by drawing from a set of simple factors. They have described using such tools as a moral imperative, insisting that child welfare officials should draw from all data at their disposal to make sure children aren’t maltreated.
Through tracking their work across the country, however, the AP found their tools can set families up for separation by rating their risk based on personal characteristics they cannot change or control, such as race or disability, rather than just their actions as parents.
In Allegheny County, a sprawling county of 1.2 million near the Ohio border, the algorithm has accessed an array of external data, including jail, juvenile probation, Medicaid, welfare, health and birth records, all held in a vast countywide “data warehouse.” The tool uses that information to predict the risk that a child will be placed in foster care two years after a family is first investigated.
County officials have told the AP they’re proud of their cutting-edge approach, and even expanded their work to build another algorithm focused on newborns. They have said they monitor their risk scoring tool closely and update it over time, including removing variables such as welfare benefits and birth records.
Vaithianathan and Putnam-Hornstein declined the AP’s repeated interview requests to discuss how they choose the specific data that powers their models. But in a 2017 report, they detailed the methods used to build the first version of Allegheny’s tool, including a footnote that described a statistical cutoff as “rather arbitrary but based on trial and error.”
“This footnote refers to our exploration of more than 800 features from Allegheny’s data warehouse more than five years ago,” the developers said by email.
That approach is borne out in their design choices, which differ from county to county.
In the same 2017 report, the developers acknowledged that using race data didn’t substantively improve the model’s accuracy, but they continued to study it in Douglas County, Colorado, though they ultimately opted against including it in that model. To address community concerns that a tool could harden racial bias in Los Angeles County, the developers excluded people’s criminal history, ZIP code and geographic indicators, but have continued to use those data points in the Pittsburgh area.
When asked about the inconsistencies, the developers pointed to their published methodology documents.
“We detail various metrics used to assess accuracy — while also detailing ‘external validations,’” the developers said via email.
When Oregon’s Department of Human Services built an algorithm inspired by Allegheny’s, it factored in a child’s race as it predicted a family’s risk, and also applied a “fairness correction” to mitigate racial bias. Last June, the tool was dropped entirely due to equity concerns after an AP investigation in April revealed potential racial bias in such tools.
Justice Department attorneys cited the same AP story last fall when federal civil rights attorneys started inquiring about additional discrimination concerns in Allegheny’s tool, three sources told the AP. They spoke on the condition of anonymity, saying the Justice Department asked them not to discuss the confidential conversations. Two said they also feared professional retaliation.
IQ TESTS, PARENTING CLASS
With no answers on when they could get their daughter home, the Hackneys’ lawyer in October filed a federal civil rights complaint on their behalf that questioned how the screening tool was used in their case.
Over time, Allegheny’s tool has tracked if members of the family have diagnoses for schizophrenia or mood disorders. It’s also measured if parents or other children in the household have disabilities, by noting whether any family members received Supplemental Security Income, a federal benefit for people with disabilities. The county said that it factors in SSI payments in part because children with disabilities are more likely to be abused or neglected.
The county also said disabilities-aligned data can be “predictive of the outcomes” and it “should come as no surprise that parents with disabilities … may also have a need for additional supports and services.” In an emailed statement, the county added that elsewhere in the country, social workers also draw on data about mental health and other conditions that may affect a parent’s ability to safely care for a child.
The Hackneys have been ordered to take parenting classes and say they have been taxed by all of the child welfare system’s demands, including IQ tests and downtown court hearings.
People with disabilities are overrepresented in the child welfare system, yet there’s no evidence that they harm their children at higher rates, said Traci LaLiberte, a University of Minnesota expert on child welfare and disabilities.
Including data points related to disabilities in an algorithm is problematic because it perpetuates historic biases in the system and it focuses on people’s physiological traits rather than behavior that social workers are brought in to address, LaLiberte said.
The Los Angeles tool weighs if any children in the family have ever gotten special education services, have had prior developmental or mental health referrals or used drugs to treat mental health.
“This is not unique to caseworkers who use this tool; it is common for caseworkers to consider these factors when determining possible supports and services,” the developers said by email.
Before algorithms were in use, the child welfare system had long distrusted parents with disabilities. Into the 1970s, they were regularly sterilized and institutionalized, LaLiberte said. A landmark federal report in 2012 noted parents with psychiatric or intellectual disabilities lost custody of their children as much as 80 percent of the time.
Across the U.S., it’s extremely rare for any child welfare agencies to require disabilities training for social workers, LaLiberte’s research has found. The result: Parents with disabilities are often judged by a system that doesn’t understand how to assess their capacity as caregivers, she said.
The Hackneys experienced this firsthand. When a social worker asked Andrew Hackney how often he fed the baby, he answered literally: two times a day. The worker seemed appalled, he said, and scolded him, saying babies must eat more frequently. He struggled to explain that the girl’s mother, grandmother and aunt also took turns feeding her each day.
FOREVER FLAGGED
Officials in Allegheny County have said that building AI into their processes helps them “make decisions based on as much information as possible,” and noted that the algorithm merely harnesses data social workers can already access.
That can include decades-old records. The Pittsburgh-area tool has tracked whether parents were ever on public benefits or had a history with the criminal justice system — even if they were minors at the time, or if it never resulted in charges or convictions.
The AP found those design choices can stack the deck against people who grew up in poverty, hardening historical inequities that persist in the data, or against people with records in the juvenile or criminal justice systems, long after society has granted redemption. And critics say that algorithms can create a self-fulfilling prophecy by influencing which families are targeted in the first place.
“These predictors have the effect of casting permanent suspicion and offer no means of recourse for families marked by these indicators,” according to the analysis from researchers at the ACLU and the nonprofit Human Rights Data Analysis Group. “They are forever seen as riskier to their children.”
As child welfare algorithms become more common, parents who have experienced social workers’ scrutiny fear the models won’t let them escape their pasts, no matter how old or irrelevant their previous scrapes with the system may have been.
Charity Chandler-Cole, who serves on the Los Angeles County Commission for Children and Families, is one of them. She landed in foster care as a teen after being arrested for shoplifting underwear for her younger sister. Then as an adult, she said, social workers once showed up at her apartment after someone spuriously reported that a grand piano was thrown at her nephew who was living at her home — even though they didn’t own such an instrument.
The local algorithm could tag her for her prior experiences in foster care and juvenile probation, as well as the unfounded child abuse allegation, Chandler-Cole says. She wonders if AI could also properly assess that she was quickly cleared of any maltreatment concerns, or that her nonviolent offense as a teen was legally expunged.
“A lot of these reports lack common sense,” said Chandler-Cole, now the mother of four and CEO of an organization that works with the court system to help children in foster care. “You are automatically putting us in these spaces to be judged with these labels. It just perpetuates additional harm.”
Chandler-Cole’s fellow commissioner Wendy Garen, by contrast, argues “more is better” and that by drawing on all available data, risk scoring tools can help make the agency’s work more thorough and effective.
GLOBAL INFLUENCE
Even as their models have come under scrutiny for their accuracy and fairness, the developers have started new projects with child welfare agencies in Northampton County, Pennsylvania, and Arapahoe County, Colorado. The states of California and Pennsylvania, as well as New Zealand and Chile, have also asked them to do preliminary work.
And as word of their methods has spread in recent years, Vaithianathan has given lectures highlighting screening tools in Colombia and Australia. She also recently advised researchers in Denmark and officials in the United Arab Emirates on how to use technology to target child services.
“Rhema is one of the world leaders and her research can help to shape the debate in Denmark,” a Danish researcher said on LinkedIn last year, regarding Vaithianathan’s advisory role related to a local child welfare tool that was being piloted.
Last year, the U.S. Department of Health and Human Services funded a national study, co-authored by Vaithianathan and Putnam-Hornstein, that concluded that their overall approach in Allegheny could be a model for other places.
HHS’ Administration for Children and Families spokeswoman Debra Johnson declined to say whether the Justice Department’s probe would influence her agency’s future support for an AI-driven approach to child welfare.
Especially as budgets tighten, cash-strapped agencies are desperate to find more efficient ways for social workers to focus on children who truly need protection. At a 2021 panel, Putnam-Hornstein acknowledged that “the overall screen-in rate remained totally flat” in Allegheny since their tool had been implemented.
Meanwhile, foster care and the separation of families can have lifelong developmental consequences for the child.
A 2012 HHS study found 95% of babies who are reported to child welfare agencies go through more than one caregiver and household change during their time in foster care, instability that researchers noted can itself be a form of trauma.
The Hackneys’ daughter already has been placed in two foster homes and has now spent more than half of her short life away from her parents as they try to convince social workers they are worthy.
Meanwhile, they say they're running out of money in the fight for their daughter. With barely enough left for food from Andrew Hackney’s wages at a local grocery store, he had to shut off his monthly cell phone service. They’re struggling to pay for the legal fees and gas money needed to attend appointments required of them.
In February, their daughter was diagnosed with a disorder that can disrupt her sense of taste, according to Andrew Hackney’s lawyer, Robin Frank, who added that the girl has continued to struggle to eat, even in foster care.
All they have for now are twice-weekly visits that last a few hours before she’s taken away again. Lauren Hackney’s voice breaks as she worries her daughter may be adopted and soon forget her own family. They say they yearn to do what many parents take for granted — put their child to sleep at night in her own bed.
“I really want to get my kid back. I miss her, and especially holding her. And of course, I miss that little giggly laugh,” Andrew Hackney said, as his daughter sprang toward him with excitement during a recent visit. “It hurts a lot. You have no idea how bad.”