ChatGPT ‘passed’ BCS exam, according to Science Bee’s experiment
Since it became publicly accessible in November last year, ChatGPT – an AI chatbot created by OpenAI – has dominated the discourse on the internet and social media. Based on the Generative Pre-trained Transformer 3 (GPT-3) language model, ChatGPT is capable of carrying on a conversation, responding to inquiries, producing stories, poems, and comics, as well as resolving challenging programming issues.
ChatGPT has also taken – and even passed – numerous challenging examinations across the globe, including the Wharton MBA exam, the United States Medical Licensing Exam, and a law school exam, as part of experiments.
Although the chatbot recently failed the Indian UPSC (Union Public Service Commission) exam, the benchmark test for recruitment to the higher civil services of the Government of India, Bangladeshi netizens wondered whether ChatGPT could pass the BCS (Bangladesh Civil Service) exam.
Science Bee, one of the largest science-based education platforms for youths in the country, has recently revealed on its social media platforms that ChatGPT has “successfully passed” the BCS preliminary exam, scoring 130 out of 200 marks in total.
Read More: Top 5 AI Chatbot Platforms and Trends in 2023
Talking about the experiment with UNB, Science Bee Founder Mobin Sikder and Executive Member Metheela Farzana Melody shared how the team tested the chatbot on the BCS exam, following a month of planning and preparation and seven days of frequent testing.
“First of all, we researched how to take the test to get the most realistic results,” Mobin told UNB. “Since ChatGPT is trained on a dataset available till September 2021, we decided to conduct the test on the questions of the latest BCS exam – 44th BCS, held in May 2022.”
“After selecting the exam, we collected the question papers and answers. Since candidates are allowed to take the question paper away after the exam, securing it did not take much time. The answer sheet, however, is not published directly. So, we prepared the final answer key ourselves, after cross-checking multiple third-party sources,” team Science Bee explained.
The language barrier emerged as a challenge during the experiment, as the BCS exam is conducted in Bangla while the chatbot is trained primarily in English. The questions had to be translated into English to keep the exam fair.
Read More: Google's AI Chatbot Bard: All You Need to Know
In the 44th BCS, each question carried 1 mark: the candidate got 1 mark for a correct answer, and 0.5 mark was deducted for each wrong answer. Candidates could also skip a question, in which case no marks were added or subtracted. ChatGPT was given the same marking scheme: at the beginning, it was briefed on the MCQ exam format and instructions through a text prompt, and it indicated it was ready to take the exam.
However, there were some picture-based questions, according to team Science Bee. Since GPT-3 is not multimodal, it cannot read or understand images, so those questions could not be input and were rejected. Besides, some questions on Bangla language and literature could not be translated into English without changing their meaning.
“The total number of such rejected questions was 22. As these are weaknesses of ChatGPT, invalid questions were treated as unanswered and no negative marking was done,” according to team Science Bee.
The remaining 178 questions were put to ChatGPT along with their options. It answered 142 correctly, answered 24 incorrectly, and for the remaining 12 stated that the correct answer was not among the options. That means the chatbot earned 142 marks for its correct answers, lost 12 marks (0.5 each) for its 24 wrong answers, and neither gained nor lost marks for unanswered questions. So, on the 44th BCS exam questions, ChatGPT passed with a total of 130 marks.
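For readers who want to check the arithmetic, here is a minimal sketch in Python of the marking scheme described above, using the figures reported by Science Bee:

```python
# Marking scheme of the 44th BCS preliminary exam, as described above:
# +1 mark per correct answer, -0.5 per wrong answer, 0 for unanswered questions.

def bcs_score(correct: int, wrong: int) -> float:
    """Return the preliminary score under the scheme above."""
    return correct * 1.0 - wrong * 0.5

# Science Bee's reported tally for ChatGPT: 142 correct, 24 wrong, and
# 34 unanswered (12 "no correct option found" plus 22 rejected questions).
print(bcs_score(correct=142, wrong=24))  # 130.0
```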
Read More: ChatGPT by Open AI: All you need to know
In the 44th BCS exam, a total of 3,50,716 candidates applied and of them, 2,76,760 candidates participated in the preliminary exam. Only 15,708 candidates passed the preliminary exam, according to reports.
“As there is no specific pass mark for BCS and the cut-off mark is not officially released, we were in touch with several candidates who appeared for the 44th BCS exam. According to the information given by them, the cut-off mark in general cadre was 125±. Since ChatGPT secured 130 marks in our test, it can be said that ChatGPT has successfully passed BCS preliminary exam,” team Science Bee told UNB.
Further explaining the chatbot’s performance, Science Bee said that, according to the test, ChatGPT was able to answer the questions quite well overall. However, it was quite weak in the Bangla language and literature category, where it answered only 5 out of 35 questions correctly. On the other hand, it performed well in the science, computer, and English language and literature categories. In the mental skills and math categories, it answered most questions correctly but took a considerable amount of time to do so.
“Besides, many times there have been incidents like getting stuck in the middle of answering. In that case, we had to take the help of ‘Regenerate Response’ to proceed and move forward,” team Science Bee said.
Read More: AI & Future of Jobs: Will Artificial Intelligence or Robots Take Your Job?
The questions for the exam were collected and translated by Metheela. Overall management of the test was conducted by Science Bee’s Content Production Head Annoy Debnath, and the final report was edited by Mobin and Sadia Binte Chowdhury.
“We did this test as part of an interesting experiment and will conduct further tests with other examinations when ChatGPT-4 becomes available. The chatbot is learning consistently and becoming more powerful every single day, and through this type of test, we want to convey a message to aspiring learners and students that we need to stay one step ahead of ChatGPT with our own learning.”
“That means, we need to stop relying on memorising and copy-paste practices, because ChatGPT can do that and will do it even better in future versions; there are also other AI projects in the pipeline, such as Google’s Bard. It can be a great assistant and companion to humankind, and it will not replace anyone if we continue to improve our learning. That is the motto of our research, aligned with our tagline ‘learn like never before’. We want people to understand the importance of learning and become skilled, in order to make AI useful,” Mobin and team Science Bee told UNB.
(Details of the test can be found on Science Bee's Facebook page and website.)
Read More: How Can Artificial Intelligence Improve Healthcare?
Not magic: Opaque AI tool may flag parents with disabilities
For the two weeks that the Hackneys’ baby girl lay in a Pittsburgh hospital bed weak from dehydration, her parents rarely left her side, sometimes sleeping on the fold-out sofa in the room.
They stayed with their daughter around the clock when she was moved to a rehab center to regain her strength. Finally, the 8-month-old stopped batting away her bottles and started putting on weight again.
“She was doing well and we started to ask when can she go home,” Lauren Hackney said. “And then from that moment on, at the time, they completely stonewalled us and never said anything.”
The couple was stunned when child welfare officials showed up, told them they were negligent and took their daughter away.
“They had custody papers and they took her right there and then,” Lauren Hackney recalled. “And we started crying.”
More than a year later, their daughter, now 2, remains in foster care. The Hackneys, who have developmental disabilities, are struggling to understand how taking their daughter to the hospital when she refused to eat could be seen as so neglectful that she’d need to be taken from her home.
They wonder if an artificial intelligence tool that the Allegheny County Department of Human Services uses to predict which children could be at risk of harm singled them out because of their disabilities.
Also Read: ChatGPT by Open AI: All you need to know
The U.S. Justice Department is asking the same question. The agency is investigating the county’s child welfare system to determine whether its use of the influential algorithm discriminates against people with disabilities or other protected groups, The Associated Press has learned. Later this month, federal civil rights attorneys will interview the Hackneys and Andrew Hackney’s mother, Cynde Hackney-Fierro, the grandmother said.
Lauren Hackney has attention-deficit hyperactivity disorder that affects her memory, and her husband, Andrew, has a comprehension disorder and nerve damage from a stroke suffered in his 20s. Their baby girl was just 7 months old when she began refusing to drink her bottles. Facing a nationwide shortage of formula, they traveled from Pennsylvania to West Virginia looking for some and were forced to change brands. The baby didn’t seem to like it.
Her pediatrician first reassured them that babies sometimes can be fickle with feeding and offered ideas to help her get back her appetite, they said.
When she grew lethargic days later, they said, the same doctor told them to take her to the emergency room. The Hackneys believe medical staff alerted child protective services after they showed up with a baby who was dehydrated and malnourished.
That’s when they believe their information was fed into the Allegheny Family Screening Tool, which county officials say is standard procedure for neglect allegations. Soon, a social worker appeared to question them, and their daughter was sent to foster care.
Over the past six years, Allegheny County has served as a real-world laboratory for testing AI-driven child welfare tools that crunch reams of data about local families to try to predict which children are likely to face danger in their homes. Today, child welfare agencies in at least 26 states and Washington, D.C., have considered using algorithmic tools, and jurisdictions in at least 11 have deployed them, according to the American Civil Liberties Union.
The Hackneys’ story — based on interviews, internal emails and legal documents — illustrates the opacity surrounding these algorithms. Even as they fight to regain custody of their daughter, they can’t question the “risk score” Allegheny County’s tool may have assigned to her case because officials won’t disclose it to them. And neither the county nor the people who built the tool have ever explained which variables may have been used to measure the Hackneys’ abilities as parents.
“It’s like you have an issue with someone who has a disability,” Andrew Hackney said in an interview from their apartment in suburban Pittsburgh. “In that case … you probably end up going after everyone who has kids and has a disability.”
As part of a yearlong investigation, the AP obtained the data points underpinning several algorithms deployed by child welfare agencies, including some marked “CONFIDENTIAL,” offering rare insight into the mechanics driving these emerging technologies. Among the factors they have used to calculate a family’s risk, whether outright or by proxy: race, poverty rates, disability status and family size. They include whether a mother smoked before she was pregnant and whether a family had previous child abuse or neglect complaints.
What they measure matters. A recent analysis by ACLU researchers found that when Allegheny's algorithm flagged people who accessed county services for mental health and other behavioral health programs, that could add up to three points to a child’s risk score, a significant increase on a scale of 20.
Allegheny County spokesman Mark Bertolet declined to address the Hackney case and did not answer detailed questions about the status of the federal probe or critiques of the data powering the tool, including by the ACLU.
“As a matter of policy, we do not comment on lawsuits or legal matters,” Bertolet said in an email.
Justice Department spokeswoman Aryele Bradford declined to comment.
NOT MAGIC
Child welfare algorithms plug vast amounts of public data about local families into complex statistical models to calculate what they call a risk score. The number that’s generated is then used to advise social workers as they decide which families should be investigated, or which families need additional attention — a weighty decision that can sometimes mean life or death.
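To make that mechanism concrete, here is a purely illustrative sketch with hypothetical variables and weights; it is not the Allegheny tool or any deployed model, whose exact inputs and weights are not public:

```python
# Purely illustrative: hypothetical features and weights, NOT the actual
# Allegheny Family Screening Tool, whose inner workings are not disclosed.

# Hypothetical weights attached to binary flags drawn from public records.
FEATURE_WEIGHTS = {
    "prior_neglect_complaint": 2.0,
    "received_public_benefits": 1.0,
    "behavioral_health_services": 3.0,  # ACLU found such flags can add up to 3 points
    "juvenile_justice_record": 1.5,
}

def risk_score(family_flags: dict) -> float:
    """Sum the weights of whichever flags apply, capped at a 20-point scale."""
    raw = 1.0 + sum(w for name, w in FEATURE_WEIGHTS.items() if family_flags.get(name))
    return min(20.0, raw)

# A family flagged only for accessing behavioral health services:
print(risk_score({"behavioral_health_services": True}))  # 4.0
```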
A number of local leaders have tapped into AI technology while under pressure to make systemic changes, such as in Oregon during a foster care crisis and in Los Angeles County after a series of high-profile child deaths in one of the nation’s largest county child welfare systems.
LA County’s Department of Children and Family Services Director Brandon Nichols says algorithms can help identify high-risk families and improve outcomes in a deeply strained system. Yet he could not explain how the screening tool his agency uses works.
“We’re sort of the social work side of the house, not the IT side of the house,” Nichols said in an interview. “How the algorithm functions, in some ways is, I don’t want to say is magic to us, but it’s beyond our expertise and experience.”
Nichols and officials at two other child welfare agencies referred detailed questions about their AI tools to the outside developers who created them.
In Larimer County, Colorado, one official acknowledged she didn’t know what variables were used to assess local families.
“The variables and weights used by the Larimer Decision Aide Tool are part of the code developed by Auckland and thus we do not have this level of detail,” Jill Maasch, a Larimer County Human Services spokeswoman, said in an email, referring to the developers.
In Pennsylvania, California and Colorado, county officials have opened up their data systems to the two academic developers who select data points to build their algorithms. Rhema Vaithianathan, a professor of health economics at New Zealand’s Auckland University of Technology, and Emily Putnam-Hornstein, a professor at the University of North Carolina at Chapel Hill’s School of Social Work, said in an email that their work is transparent and that they make their computer models public.
“In each jurisdiction in which a model has been fully implemented we have released a description of fields that were used to build the tool, along with information as to the methods used,” they said by email.
A 241-page report on the Allegheny County website includes pages of coded variables and statistical calculations.
Vaithianathan and Putnam-Hornstein’s work has been hailed in reports published by UNICEF and the Biden administration alike for devising computer models that promise to lighten caseworkers’ loads by drawing from a set of simple factors. They have described using such tools as a moral imperative, insisting that child welfare officials should draw from all data at their disposal to make sure children aren’t maltreated.
Through tracking their work across the country, however, the AP found their tools can set families up for separation by rating their risk based on personal characteristics they cannot change or control, such as race or disability, rather than just their actions as parents.
In Allegheny County, a sprawling county of 1.2 million near the Ohio border, the algorithm has accessed an array of external data, including jail, juvenile probation, Medicaid, welfare, health and birth records, all held in a vast countywide “data warehouse.” The tool uses that information to predict the risk that a child will be placed in foster care two years after a family is first investigated.
County officials have told the AP they’re proud of their cutting-edge approach, and even expanded their work to build another algorithm focused on newborns. They have said they monitor their risk scoring tool closely and update it over time, including removing variables such as welfare benefits and birth records.
Vaithianathan and Putnam-Hornstein declined the AP’s repeated interview requests to discuss how they choose the specific data that powers their models. But in a 2017 report, they detailed the methods used to build the first version of Allegheny’s tool, including a footnote that described a statistical cutoff as “rather arbitrary but based on trial and error.”
“This footnote refers to our exploration of more than 800 features from Allegheny’s data warehouse more than five years ago,” the developers said by email.
That approach is borne out in their design choices, which differ from county to county.
In the same 2017 report, the developers acknowledged that using race data didn’t substantively improve the model’s accuracy, but they continued to study it in Douglas County, Colorado, though they ultimately opted against including it in that model. To address community concerns that a tool could harden racial bias in Los Angeles County, the developers excluded people’s criminal history, ZIP code and geographic indicators, but have continued to use those data points in the Pittsburgh area.
When asked about the inconsistencies, the developers pointed to their published methodology documents.
“We detail various metrics used to assess accuracy — while also detailing ‘external validations,’” the developers said via email.
When Oregon’s Department of Human Services built an algorithm inspired by Allegheny’s, it factored in a child’s race as it predicted a family’s risk, and also applied a “fairness correction” to mitigate racial bias. Last June, the tool was dropped entirely due to equity concerns after an AP investigation in April revealed potential racial bias in such tools.
Justice Department attorneys cited the same AP story last fall when federal civil rights attorneys started inquiring about additional discrimination concerns in Allegheny’s tool, three sources told the AP. They spoke on the condition of anonymity, saying the Justice Department asked them not to discuss the confidential conversations. Two said they also feared professional retaliation.
IQ TESTS, PARENTING CLASS
With no answers on when they could get their daughter home, the Hackneys’ lawyer in October filed a federal civil rights complaint on their behalf that questioned how the screening tool was used in their case.
Over time, Allegheny’s tool has tracked if members of the family have diagnoses for schizophrenia or mood disorders. It’s also measured if parents or other children in the household have disabilities, by noting whether any family members received Supplemental Security Income, a federal benefit for people with disabilities. The county said that it factors in SSI payments in part because children with disabilities are more likely to be abused or neglected.
The county also said disabilities-aligned data can be “predictive of the outcomes” and it “should come as no surprise that parents with disabilities … may also have a need for additional supports and services.” In an emailed statement, the county added that elsewhere in the country, social workers also draw on data about mental health and other conditions that may affect a parent’s ability to safely care for a child.
The Hackneys have been ordered to take parenting classes and say they have been taxed by all of the child welfare system’s demands, including IQ tests and downtown court hearings.
People with disabilities are overrepresented in the child welfare system, yet there’s no evidence that they harm their children at higher rates, said Traci LaLiberte, a University of Minnesota expert on child welfare and disabilities.
Including data points related to disabilities in an algorithm is problematic because it perpetuates historic biases in the system and it focuses on people’s physiological traits rather than behavior that social workers are brought in to address, LaLiberte said.
The Los Angeles tool weighs if any children in the family have ever gotten special education services, have had prior developmental or mental health referrals or used drugs to treat mental health.
“This is not unique to caseworkers who use this tool; it is common for caseworkers to consider these factors when determining possible supports and services,” the developers said by email.
Before algorithms were in use, the child welfare system had long distrusted parents with disabilities. Into the 1970s, they were regularly sterilized and institutionalized, LaLiberte said. A landmark federal report in 2012 noted parents with psychiatric or intellectual disabilities lost custody of their children as much as 80 percent of the time.
Across the U.S., it’s extremely rare for any child welfare agencies to require disabilities training for social workers, LaLiberte’s research has found. The result: Parents with disabilities are often judged by a system that doesn’t understand how to assess their capacity as caregivers, she said.
The Hackneys experienced this firsthand. When a social worker asked Andrew Hackney how often he fed the baby, he answered literally: two times a day. The worker seemed appalled, he said, and scolded him, saying babies must eat more frequently. He struggled to explain that the girl’s mother, grandmother and aunt also took turns feeding her each day.
FOREVER FLAGGED
Officials in Allegheny County have said that building AI into their processes helps them “make decisions based on as much information as possible,” and noted that the algorithm merely harnesses data social workers can already access.
That can include decades-old records. The Pittsburgh-area tool has tracked whether parents were ever on public benefits or had a history with the criminal justice system — even if they were minors at the time, or if it never resulted in charges or convictions.
The AP found those design choices can stack the deck against people who grew up in poverty, hardening historical inequities that persist in the data, or against people with records in the juvenile or criminal justice systems, long after society has granted redemption. And critics say that algorithms can create a self-fulfilling prophecy by influencing which families are targeted in the first place.
“These predictors have the effect of casting permanent suspicion and offer no means of recourse for families marked by these indicators,” according to the analysis from researchers at the ACLU and the nonprofit Human Rights Data Analysis Group. “They are forever seen as riskier to their children.”
As child welfare algorithms become more common, parents who have experienced social workers’ scrutiny fear the models won’t let them escape their pasts, no matter how old or irrelevant their previous scrapes with the system may have been.
Charity Chandler-Cole, who serves on the Los Angeles County Commission for Children and Families, is one of them. She landed in foster care as a teen after being arrested for shoplifting underwear for her younger sister. Then as an adult, she said, social workers once showed up at her apartment after someone spuriously reported that a grand piano was thrown at her nephew who was living at her home — even though they didn’t own such an instrument.
The local algorithm could tag her for her prior experiences in foster care and juvenile probation, as well as the unfounded child abuse allegation, Chandler-Cole says. She wonders if AI could also properly assess that she was quickly cleared of any maltreatment concerns, or that her nonviolent offense as a teen was legally expunged.
“A lot of these reports lack common sense,” said Chandler-Cole, now the mother of four and CEO of an organization that works with the court system to help children in foster care. “You are automatically putting us in these spaces to be judged with these labels. It just perpetuates additional harm.”
Chandler-Cole’s fellow commissioner Wendy Garen, by contrast, argues “more is better” and that by drawing on all available data, risk scoring tools can help make the agency’s work more thorough and effective.
GLOBAL INFLUENCE
Even as their models have come under scrutiny for their accuracy and fairness, the developers have started new projects with child welfare agencies in Northampton County, Pennsylvania, and Arapahoe County, Colorado. The states of California and Pennsylvania, as well as New Zealand and Chile, have also asked them to do preliminary work.
And as word of their methods has spread in recent years, Vaithianathan has given lectures highlighting screening tools in Colombia and Australia. She also recently advised researchers in Denmark and officials in the United Arab Emirates on how to use technology to target child services.
“Rhema is one of the world leaders and her research can help to shape the debate in Denmark,” a Danish researcher said on LinkedIn last year, regarding Vaithianathan’s advisory role related to a local child welfare tool that was being piloted.
Last year, the U.S. Department of Health and Human Services funded a national study, co-authored by Vaithianathan and Putnam-Hornstein, that concluded that their overall approach in Allegheny could be a model for other places.
HHS’ Administration for Children and Families spokeswoman Debra Johnson declined to say whether the Justice Department’s probe would influence her agency’s future support for an AI-driven approach to child welfare.
Especially as budgets tighten, cash-strapped agencies are desperate to find more efficient ways for social workers to focus on children who truly need protection. At a 2021 panel, Putnam-Hornstein acknowledged that “the overall screen-in rate remained totally flat” in Allegheny since their tool had been implemented.
Meanwhile, foster care and the separation of families can have lifelong developmental consequences for the child.
A 2012 HHS study found 95% of babies who are reported to child welfare agencies go through more than one caregiver and household change during their time in foster care, instability that researchers noted can itself be a form of trauma.
The Hackneys’ daughter already has been placed in two foster homes and has now spent more than half of her short life away from her parents as they try to convince social workers they are worthy.
Meanwhile, they say they're running out of money in the fight for their daughter. With barely enough left for food from Andrew Hackney’s wages at a local grocery store, he had to shut off his monthly cell phone service. They’re struggling to pay for the legal fees and gas money needed to attend appointments required of them.
In February, their daughter was diagnosed with a disorder that can disrupt her sense of taste, according to Andrew Hackney’s lawyer, Robin Frank, who added that the girl has continued to struggle to eat, even in foster care.
All they have for now are twice-weekly visits that last a few hours before she’s taken away again. Lauren Hackney’s voice breaks as she worries her daughter may be adopted and soon forget her own family. They say they yearn to do what many parents take for granted — put their child to sleep at night in her own bed.
“I really want to get my kid back. I miss her, and especially holding her. And of course, I miss that little giggly laugh,” Andrew Hackney said, as his daughter sprang toward him with excitement during a recent visit. “It hurts a lot. You have no idea how bad.”
Top 5 AI Chatbot Platforms and Trends in 2023
Artificial Intelligence isn’t anything new. John McCarthy first proposed the idea of AI back in the 1950s: the proposition that machines would one day think and interact like a human. That highly conceptual proposition was a way forward to understanding both the limitations of machines and whether humans could pass on something like sentience.
While we’re still far from sentience, AI has nonetheless started to transform our lives. From conceptual humanoid robots like Sophia to IoT and even chatbots, the applications and benefits of AI are visible across the board.
Today we’ll talk about the most accessible form of AI for the general public: chatbots. They’re fast, accurate, simple, and in most cases free. Here’s our take on five of the most trending AI chatbots.
Read More: Rakuten Viber launches new chatbot, AI Chat and Create
What is an AI Chatbot?
Just like AI itself, the concept of an AI chatbot isn’t new. The story of AI chatbots started with ELIZA back in 1966, when Joseph Weizenbaum of MIT introduced a chatting program that could carry out basic interaction with the user. It worked by matching pre-programmed phrase patterns against the user’s input to generate a somewhat meaningful response.
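To give a flavour of that pattern-matching idea, here is a toy ELIZA-style responder in Python, a rough illustration of the approach rather than Weizenbaum’s actual script:

```python
# Toy ELIZA-style chatbot: match pre-programmed patterns against the
# user's input and echo back a canned, reflected response.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."

print(respond("I am tired of exams"))  # Why do you say you are tired of exams?
```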
But the first proper use of the Artificial Intelligence Markup Language (AIML) came with ALICE, an interactive chatbot created by Richard Wallace in 1995. From then on, there was no looking back: later came Jabberwacky by Rollo Carpenter and Mitsuku by Steve Worswick.
Big companies like Microsoft also jumped into the game with Cortana on their now-defunct Windows Phone. But all of these were limited to a handful of functions. In a sense, they were intelligent with highly limited abilities. But that all changed with OpenAI.
Read More: 7 Top AI Writing Tools, Software to Generate Human-Like Text
Best AI Chatbots in 2023
There are probably thousands of chatbots out there catering to different niches: specialized chatbots for businesses, industries, and even events. Most chatbots are built on some form of natural language processing (NLP) technology. Here, we will focus on general-purpose chatbots that are versatile or cater to a broad audience.
ChatGPT
If you haven’t heard the name ChatGPT in the last couple of months, you’ve been living under a rock. This general-purpose chatbot gained over 100 million monthly active users within two months, reaching that milestone faster than any social media platform.
ChatGPT is based on the Generative Pre-trained Transformer 3 (GPT-3) model. This natural language processor combines AI and machine learning, with information and training continually fed into the platform. The result is the most human-like interaction from a chat platform to date.
Read More: ChatGPT by Open AI: All you need to know
OpenAI fed around 570 GB of internet data, comprising over 300 billion words, into the machine learning model. With ChatGPT, interaction is not limited to small talk: you can get a full study routine, a fitness regime, or even marketing campaigns out of the chatbot. You can also ask it to write a poem or do entry-level programming.
Surprised? Wait till you find out that ChatGPT has already passed the medical licensing exam in the USA, a regional bar exam, the Google entry-level software engineer interview, as well as the AP English essay test.
Pros:
· Most realistic output to date
· STEM integration
· Highly interactive
Read More: Ameca: World’s Most Realistic Advanced Humanoid Robot AI Platform
Cons:
· The platform isn’t always available due to the high user base
· Data is available up until 2021 only
Google's AI Chatbot Bard: All You Need to Know
An AI chatbot is a computer program designed to simulate a conversation with a human. It uses natural language processing and artificial intelligence to understand user input and respond in a meaningful way. AI chatbots can be used for customer service, providing personalized recommendations, or other tasks.
Recently, an AI chatbot named ChatGPT has taken the world by storm. More than a usual chatbot, it draws on a huge collection of data and is widely seen as a threat to Google. To fight this, Google has announced its own chatbot, named Bard. Let’s find out the details of Google’s AI chatbot Bard.
What is AI Chatbot Bard?
At present, there is limited information on Google's AI-powered tool, which can only be accessed by those selected as "trusted testers." However, following the company's demonstration of the product in Paris on February 8, we can now provide answers to some of the most frequent questions posed about Bard AI. A public launch of the tool is expected in the near future.
Read More: ChatGPT by Open AI: All you need to know
Google Bard is essentially a chatbot that functions using AI, similar to ChatGPT. To enable its conversations, Bard utilizes the Language Model for Dialogue Applications (LaMDA) model. Initially, a less complex version of this language model will be used during the test phase.
Bard strives to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of Google’s large language models, drawing on information from the web to provide fresh, high-quality responses.
Bard can be a catalyst for creativity and a platform for inquiry, helping you explain fresh discoveries from NASA’s James Webb Space Telescope to a nine-year-old, or learn more about the best strikers in football right now and then get drills to improve your own skills.
Read More: High Paid Jobs that Will Never be Replaced by AI
Rakuten Viber launches new chatbot, AI Chat and Create
Rakuten Viber, a global leader in private and secure messaging and voice-based communication, has launched a new chatbot, AI Chat and Create, providing users easy access to AI text and image generators.
By integrating advanced models of generative AI, such as DALL-E and Davinci, the Viber app now allows users to ask the chatbot any question or test the chatbot's creativity by designing unique images.
With just a few taps, users can transition effortlessly between chatting with friends, using the bot, sharing their art, or getting a question answered.
Read More: ChatGPT by Open AI: All you need to know
"Excitement about generative AI technology currently has much of the tech industry's attention and every day more and more people are exposed to the wonders that can be achieved with this technology. However, access to some of these tools is not very simple for everyone and now, we are offering the easiest way to try out various AI services in the comfort of a Viber chat, inside the app, without the need to register to a special service or further hassle and completely free," Ofir Eyal, CEO of Rakuten Viber, said.
"We provide access to these industry-leading AI tools directly on the app and users can quickly share their creations or answers. Right now, the chatbot offers two options – one for images and one for text - and we're looking continuously to expand the offering in the near future," Ofir added.
"The chatbot already has over 70,000 subscribers and over 250,000 viewers and is constantly growing. If people don't know where to start, they can easily click "Inspire Me" for the chatbot to share an example of its capabilities," Viber said.s
Read more: Viber cuts business ties with Facebook
The AI Chat and Create chatbot can be found by searching in the chat function of the Viber app or on its explore page.
Use AI to develop entrepreneurs: Speakers
The Entrepreneur Economist Club of Dhaka School of Economics organized a virtual seminar on the importance of big data and machine learning in entrepreneurship analysis.
Speakers from different countries joined the seminar online, while students and faculty members of entrepreneurship economics hosted the programme, held on Tuesday.
The speakers emphasized building skills in machine learning and data analytics to face the challenges of entrepreneurship in the new era.
Major changes are likely in the country’s economic activities, they said, so skills in machine learning and data analytics will put an employee ahead.
Read More: Google's AI Chatbot Bard: All You Need to Know
Prof Parul Khanna, Vice Principal of IMT, Faridabad, India, was the chief guest at the seminar, while economist and Coordinator of Entrepreneurship Economics Professor Dr. Muhammad Mahbub Ali chaired the session.
Prof. Dr. Rinku Sharma Dixit of the New Delhi School of Management, India, presented a keynote paper on ‘Using Big Data and Artificial Intelligence to Accelerate Entrepreneurial Development’.
Dr. Sudipta Bhattacharya, Dr. Dipika Kundal, Dr. Kunal Sheel, Dr. Pranjal Kumar Pukhan, Assistant Professors Rehena Parveen, and Dr. Sara Tasnim, among others, spoke at the function.
Read More: ChatGPT maker releases tool to help teachers detect if AI wrote homework
Google hopes ‘Bard’ will outsmart ChatGPT, Microsoft in AI
Google is girding for a battle of wits in the field of artificial intelligence with “Bard,” a conversational service apparently aimed at countering the popularity of the ChatGPT tool backed by Microsoft.
Bard initially will be available exclusively to a group of “trusted testers” before being widely released later this year, according to a Monday blog post from Google CEO Sundar Pichai.
Google’s chatbot is supposed to be able to explain complex subjects, such as outer space discoveries, in terms simple enough for a child to understand. Google also says the service will perform more mundane tasks, such as providing tips for planning a party, or lunch ideas based on what food is left in a refrigerator. Pichai didn’t say in his post whether Bard will be able to write prose in the vein of William Shakespeare, the playwright who apparently inspired the service’s name.
Read More: Google's AI Chatbot Bard: All You Need to Know
“Bard can be an outlet for creativity, and a launchpad for curiosity,” Pichai wrote.
Google announced Bard’s existence less than two weeks after Microsoft disclosed it’s pouring billions of dollars into OpenAI, the San Francisco-based maker of ChatGPT and other tools that can write readable text and generate new images.
Microsoft’s decision to up the ante on a $1 billion investment that it previously made in OpenAI in 2019 intensified the pressure on Google to demonstrate that it will be able to keep pace in a field of technology that many analysts believe will be as transformational as personal computers, the internet and smartphones have been in various stages over the past 40 years.
Read More: ChatGPT maker releases tool to help teachers detect if AI wrote homework
In a report last week, CNBC said a team of Google engineers working on artificial intelligence technology “has been asked to prioritize working on a response to ChatGPT.” Bard had been a service being developed under a project called “Atlas,” as part of Google’s “code red” effort to counter the success of ChatGPT, which has attracted tens of millions of users since its general release late last year, while also raising concerns in schools about its ability to write entire essays for students.
Pichai has been emphasizing the importance of artificial intelligence for the past six years, with one of the most visible byproducts materializing in 2021 as part of a system called “Language Model for Dialogue Applications,” or LaMDA, which will be used to power Bard.
Google also plans to begin incorporating LaMDA and other artificial intelligence advancements into its dominant search engine to provide more helpful answers to the increasingly complicated questions being posed by its billions of users. Without providing a specific timeline, Pichai indicated the artificial intelligence tools will be deployed in Google search in the near future.
Read More: ChatGPT by Open AI: All you need to know
In another sign of Google’s deepening commitment to the field, Google announced last week that it is investing in and partnering with Anthropic, an AI startup led by some former leaders at OpenAI. Anthropic has also built its own AI chatbot named Claude and has a mission centered on AI safety.
ChatGPT maker releases tool to help teachers detect if AI wrote homework
The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.
The new AI Text Classifier launched Tuesday (January 31, 2023) by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.
OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team tasked to make its systems safer.
Read More: What is ChatGPT, why are schools blocking it?
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.
Read More: CES 2023: Walton's smart AI products get huge response
The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.
Read More: AI & Future of Jobs: Will Artificial Intelligence or Robots Take Your Job?
School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.
“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,’” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.
“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.
Read More: How Can Artificial Intelligence Improve Healthcare?
OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer a passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text, such as a college admissions essay or a literary analysis of Ralph Ellison’s “Invisible Man”, and the tool will label it as either “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it came up with a result.
Read More: Ai and Future of Content Writing: Will Artificial Intelligence replace writers?
“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”
Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.
Read More: Ameca: World’s Most Realistic Advanced Humanoid Robot AI Platform
“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them.”
It’s an unusually public role for the research-oriented San Francisco startup, now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.
France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology. But the government minister — a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris — said there are also difficult ethical questions that will need to be addressed.
Read More: ChatGPT by Open AI: All you need to know
“So if you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”
He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.
First-ever AI-generated mini-comic book in Bangladesh launched
For the first time in Bangladesh, a mini-comic book created with artificial intelligence (AI) was published on Sunday (January 15, 2023) night.
Science Bee, one of the largest science-based education platforms for youths across the country, published the AI-generated mini-comic titled ‘Manobjatir Grohon’ on its official Facebook page at 8 pm.
Regarding the publication, Science Bee said in its post: "Imagine this, an artificial intelligence program is writing stories for you, and another artificial intelligence program is illustrating the stories. Could we have imagined this a few years ago?
Read More: ChatGPT by Open AI: All you need to know
The script, title, character development, and illustration were all handled entirely by artificial intelligence in this science fiction comic book, produced for the first time in Bangladesh and perhaps in the Bengali language as well.”
The 17-page mini-comic book narrates the journey of humankind, envisioning the end of the human race in the world and the aftermath.
“By using ChatGPT and Midjourney AI, we created the comics. Artificial intelligence was used to produce the storyline, characters, narration, and each page’s image in the comic. As for our human contribution, we worked together on the translation and a little bit of Photoshop,” according to Science Bee.
Read: Universal to open theme park in Texas for young kids
Co-created with human assistance by Annoy Debnath, the free mini-comic book has already created a buzz on Facebook and is receiving positive feedback from its readers.
Science Bee was founded in 2018 by Mobin Sikder, a chemistry student at Jahangirnagar University. The platform aims to promote diversity and inclusivity in science and technology, reach under-served communities, and increase the number of people actively engaged and involved in science and technology.
The mini-comic is available at https://www.sciencebee.com.bd/ebook-3/.
Read More: 7 Top AI Writing Tools, Software to Generate Human-Like Text
ChatGPT by Open AI: All you need to know
Artificial intelligence has been advancing day by day, with programmers and researchers innovating new tools, software, and programs to help humans in diverse sectors. Recently, the advent of ChatGPT has created a buzz among tech specialists as well as the general public. Let’s find out what ChatGPT is, how to use it, its pros and cons, and how to apply it for career development.
What is ChatGPT
ChatGPT was created by OpenAI, an artificial intelligence research organization. Previously, OpenAI created GPT-3 and GPT-3.5, which are able to generate human-like text. ChatGPT is a GPT-3-based AI tool that can process natural language and hold human-like conversations between a user and an AI chatbot. In simple terms, ChatGPT is like a digital helping hand for the user to solve different problems.
How Does ChatGPT Work?
ChatGPT is a large language model based on GPT-3 and GPT-3.5. Using machine learning algorithms along with human intervention, ChatGPT can formulate human-like text to answer users’ queries or solve their problems. It improves its performance through Reinforcement Learning from Human Feedback (RLHF).
Read More: AI & Future of Jobs: Will Artificial Intelligence or Robots Take Your Job?
How to Use ChatGPT
To use ChatGPT, developers must first sign up for an OpenAI API key, allowing them to access the model and use it for their own applications.
The process of installation and setup of ChatGPT AI Bot:
- First, create an account on OpenAI’s official website.
- Generate a new API key.
- Access ChatGPT using this API key.
- Install OpenAI’s relevant package to access the ChatGPT model. For example, to code in Python, the user needs to install the OpenAI Python package. After installation, the ChatGPT model can respond to natural language queries.
- ChatGPT’s language model is able to produce text and code to answer queries in sensible ways.
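As a minimal sketch of such a query, assuming the openai Python package as it worked in early 2023 and an API key stored in the OPENAI_API_KEY environment variable (the model name here, text-davinci-003, is one example from the GPT-3 family):

```python
# Minimal sketch using the openai Python package (pip install openai),
# as the API worked in early 2023. Assumes the key is stored in the
# OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # an example GPT-3-family model
    prompt="Explain what an API key is in one sentence.",
    max_tokens=60,
)

print(response["choices"][0]["text"].strip())
```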
Advantages and Disadvantages of ChatGPT
Pros:
- ChatGPT is efficient in programming, coding, and written language.
- It can write mathematical proofs and solve coding problems.
- This AI-based tool makes it easy for users to have conversations with AI in a natural way.
- It is free of cost and simple to use.
- It can serve as an AI assistant to create content or develop software.
Read More: How Can Artificial Intelligence Improve Healthcare?
Cons:
- Unlike search engines such as Google, ChatGPT does not display articles, news, or other credible sources while answering users’ queries. In simple terms, the ChatGPT AI chatbot does not provide any citation or source of information.
- This AI tool can devalue the effort and hard work of talented professionals, programmers, writers, and specialists: drawing on their original work, any user of ChatGPT can produce code, articles, or solutions to math problems without much effort.
- When asked for elaborate content, ChatGPT overuses certain phrases and produces wordy, verbose text.
- ChatGPT tends to give very similar responses to different users for the same queries: two users asking for essays on the same topic may receive nearly identical content. Personalization is therefore limited.
The Different Applications of ChatGPT in Professional Works
Solving Coding Problems and Developing Apps
ChatGPT can understand and write code efficiently, a major advance over its predecessor language models. The user can instruct this AI chatbot to address diverse coding problems and assist in the debugging process.
Besides solving programming problems, the tool can provide code examples for developing apps.
Read More: Ai and Future of Content Writing: Will Artificial Intelligence replace writers?
Writing Content and Blog Posts
ChatGPT can be utilized to produce almost any kind of text, including essays and blog posts. This AI chatbot can enhance the style and quality of writing, and users can also employ it to create catchy titles for blog posts and articles.
Searching Information
Unlike search engines, ChatGPT gives straightforward, uncluttered responses to diverse queries. Thus, the AI chatbot can explain complex issues in an easy way.
Answering Customer Questions
Businesses can utilize the AI-based ChatGPT tool to provide helpful responses to customer queries and improve the customer experience. ChatGPT can understand customers’ needs and help companies address them effectively.
Alternative to Google Search
ChatGPT has the ability to serve as an alternative to search engines like Google Search. However, whether this AI chatbot is an ideal substitute for Google depends on the user’s precise needs and preferences. For instance, ChatGPT is helpful for users who prefer a conversational search experience.
Read More: Top 10 Most Exciting Innovations of 2022 in Technology