Major areas for AI safety
As AI adoption increases, we need a multi-pronged approach to ensure that AI usage is safe. This approach should encompass education, intervention, and accessibility.
First, widespread usage of AI is creating a new mode of communication and social interaction, in which genuine user-generated content is intermingled with AI-generated content. The social strata that will be most affected by these systems are currently the least educated about the technology, and we need to close that gap. Model cards and data cards are a good step toward educating people about these technologies, but we should aim for people to treat these systems the way they treat a knife in their kitchen: they know it can cause harm, but they also know the right way to use it to increase their productivity.
Second, people, social groups, and their representatives should be able to intervene in these AI systems to ensure they are locally adoptable and do not impose outside beliefs on a community. When AI fails, there should be a way to exit, either through a no-AI option or by transferring control back to the user. This would require building better AI-interoperable UX.
Third, AI should be accessible to as wide a community as possible, so that it does not perpetuate a rich-get-richer phenomenon and deepen the socioeconomic divide. Accessibility, however, should not be limited to open-source models; it should also mean the ability of people to use AI as a tool rather than a magic wand, and the existence of ways for communities to invest reasonably in bringing AI to their members. Something similar happened with the Maker movement, which made 3D printing broadly accessible and ensured community participation.
Working on Twitter's image cropping bias and NER bias work let me tackle these problems firsthand, and it showed me that the solutions to many tech problems often lie not within the span of a keyboard but across the span of the people who use it, and the latter is not normally distributed.