Rise of privacy in AI-based computation – By Aditya Abeysinghe

AI (Artificial Intelligence) is used in many digital apps regardless of the computational and storage capacities of devices, from small-scale edge devices to large-scale server systems. The growing use of AI has brought benefits while also causing several issues. Breach of privacy is one of the most commonly reported issues in AI.

Ethics when using AI

The lifecycle of deploying an AI model consists of gathering data for training, selecting features to build a model, training, and so on. Ethics in AI are guidelines that need to be followed in each of these stages to ensure AI models and systems are transparent to users. Ethics in AI ensures that there is a set of rules that each member involved in deploying an AI model must comply with, and that no member gains hidden benefits beyond the goals of the whole team.

Ethics in AI also ensures that users outside the team that deployed the models are notified of any issues before they use a system. In particular, businesses that deploy models and provide a system to end users need to ensure that the system has no issues that could harm users. Security breaches of these AI models also need to be monitored and resolved by businesses, and businesses cannot use personal data of users without user consent.

Breaches of privacy in AI

Breaches of privacy in user data have been growing with the wide-scale use of internet-based services. Organizations monitoring privacy and data-related threats have used laws to ensure businesses do not obtain user data by breaching their security. As an example, the European Union has enacted several regulations on the privacy of user data, such as the GDPR (General Data Protection Regulation). Businesses that have breached user privacy under such laws have been required to delete affected data and compensate users for any harm from such breaches.

How could privacy in AI be enhanced?

Many AI-based services use techniques to enhance privacy. Encryption of shared content is common in many apps. Though encrypted text can in principle be recovered by intermediaries, doing so typically requires a large amount of effort and time. Many businesses use encryption for text, voice, and file-sharing media in their products. Such data security methods therefore enhance the privacy of media shared over networks.
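To make the idea concrete, here is a minimal toy sketch of symmetric encryption using a random one-time key (XOR cipher). This is an illustration of the concept only, not a production scheme; real apps use vetted algorithms such as AES, and the function names here are assumptions:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each plaintext byte with a key byte
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption reuses the same operation
    return encrypt(ciphertext, key)

message = b"meeting at 10am"
key = secrets.token_bytes(len(message))  # random key as long as the message

ciphertext = encrypt(message, key)
recovered = decrypt(ciphertext, key)
```

Only a party holding the key can recover the original message; an intermediary sees only the ciphertext.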

Today, personal information of users is collected through various devices, from wearable IoT (Internet of Things) devices to websites. Businesses use the collected data to improve customer satisfaction, applying AI models to understand user behavior. However, collected data can also be used for hidden purposes in ways that affect user privacy. Therefore, to limit the misuse of data, users could limit the information they provide to these products.
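Limiting the information provided to a product can be thought of as data minimization: only explicitly permitted fields leave the device. A minimal sketch, where the record fields and the allow-list are hypothetical examples:

```python
# Hypothetical record collected by a wearable device (field names are assumptions)
collected = {
    "steps": 8500,
    "heart_rate": 72,
    "location": "6.9271,79.8612",
    "contacts": ["alice", "bob"],
}

# Fields the user has agreed to share; everything else is dropped before upload
ALLOWED_FIELDS = {"steps", "heart_rate"}

def minimize(record: dict, allowed: set) -> dict:
    """Keep only explicitly permitted fields (data-minimization sketch)."""
    return {k: v for k, v in record.items() if k in allowed}

shared = minimize(collected, ALLOWED_FIELDS)
```

Here `shared` contains only the step count and heart rate; location and contacts never leave the device.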

Image Courtesy: https://medium.com/
