Doing Good with AI
A panel delved into the topic of AI, with experts presenting their tools and discussing its positive effects and the trust that can be built around it.
On Tuesday, the showcase track panel Doing Good With AI was moderated by Maida Ćulahović from CA Why Not. Maida opened the panel by noting that doing good is not necessarily the first association that artificial intelligence brings to mind. There have been many doubts about AI, along with ethical issues and privacy concerns. She pointed out, however, that AI also has many positive effects in fields such as healthcare and research.
The focus of the panel was to discuss the positive effects of AI and the ways it can be used to support the public mission.
Maida then introduced the first guest of the panel, Erkan Saka, a professor of media and journalism studies at Istanbul Bilgi University. Saka recently directed a project on AI literacy, a topic of major importance today.
Saka said that they had held meetings with the leading NGOs now working in the earthquake-stricken region of southeast Turkey. Before that, he added, they had also met with fact-checkers and journalists.
Saka said that the early approach to social media used to be euphoric, which is not the case with AI: there is much more anxiety around AI than euphoria or optimism. He said there is a certain consciousness of social concerns, and while he did not know whether it would lead to positive developments, he is hopeful. He wondered how NGOs and civic initiatives could use AI.
He then pointed out that global discussions sometimes lack local resonance. He emphasized data bias, which he finds critical and impossible to ignore. In their conversations, he said, there was not enough data to begin with: some data was inaccessible, and some simply did not exist. Their main focus in future studies will therefore be to collect enough data and build their own data sets. He added that data is very limited in the classroom as well.
Another issue Saka pointed out was the lack of collaboration with technologists, which he sees as a new challenge of AI development. There are many qualified companies and many civic initiatives, he said, but they do not know each other, so their focus will be on learning how to collaborate.
“And in the meantime, there is control over AI, over AI bots, basically. But there are also some free, open-access alternatives, and there are many models that can be trained. Even Meta, I think, had a very good strategy in opening up its data sets, so that now technologists can work with NGOs using at least these open-access versions. Without corporatization, without limitations, to create more localized models for more specific purposes. And we also saw that, despite the global discussions over anxieties, all civic initiatives were really into using AI. But they don’t know how to use it, basically. They need specialized apps or models.”
Erkan Saka also pointed out the issue of budget. For personal, small-scale usage, budgets are fine, but if the goal is to create something more specific and unique, both funding and technology are needed.
Maida then introduced the second speaker on the panel, Hayk Asriyants, a social entrepreneur and founder of Startup Büro in Georgia. He has developed an AI tool meant to help journalists and enhance accountability and trust in public officials. Asriyants then presented the tool, which is already built and will become available in September 2024.
Asriyants’s tool can analyze pictures of politicians and other public figures, recognize the items they are wearing, and identify where those items come from and how much they cost.
He created the tool to help investigative journalists and fact-checkers analyze such items: while conducting research in several countries, he realized that many public figures and politicians possess items far more expensive than their salaries could afford. The tool is called Integrity Lens.
Because the tool could be abused by individuals to analyze pictures of their family members or dates, it was created specifically for journalists.
Author: Irma Halilović