Navigating the ethical minefield of the AI landscape: Intel’s Santhosh Viswanathan on what India should do


The remarkable strides in artificial intelligence (AI) have opened up unprecedented possibilities, impacting virtually every facet of our lives. What was once a realm reserved for specialised experts has now become accessible to individuals worldwide, who are harnessing AI’s capabilities at scale. This accessibility is revolutionising how we work, learn, and play. 

While the democratisation of AI heralds limitless potential for innovation, it also introduces considerable risks. Heightened concerns over misuse, safety, bias, and misinformation underscore the importance of embracing responsible AI practices now more than ever.

An ethical conundrum

Derived from the Greek word ethos, which can mean custom, habit, character, or disposition, ethics is a system of moral principles. The ethics of AI refers both to the behaviour of the humans who build and use AI systems and to the behaviour of the systems themselves.

For a while now, there have been conversations – academic, business, and regulatory – about the need for responsible AI practices to enable ethical and equitable AI. All stakeholders – from chipmakers to device manufacturers to software developers – should work together to design AI capabilities that lower risks and mitigate potentially harmful uses of AI.

Even Sam Altman, OpenAI’s chief executive, has remarked that while AI will be “the greatest technology humanity has yet developed”, he was “a little bit scared” of its potential. 

Addressing these challenges

Responsible development must form the bedrock of innovation throughout the AI life cycle to ensure AI is built, deployed and used in a safe, sustainable and ethical way. A few years ago, the European Commission published Ethics Guidelines for Trustworthy AI, laying out essential requirements for developing ethical and trustworthy AI. According to the guidelines, trustworthy AI should be lawful, ethical, and robust. 

While transparency and accountability are cornerstones of ethical AI, data integrity is equally paramount, since data is the foundation of all machine learning algorithms and Large Language Models (LLMs). Apart from safeguarding data privacy, there is a need to obtain explicit consent for data usage and to source and process that data responsibly. Additionally, because our inherent biases and prejudices are exhibited in our data, AI models trained on these datasets can amplify and scale those human biases. We must, therefore, proactively mitigate bias in the data, while ensuring diversity and inclusivity in the development of AI systems.
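
To make this concrete, here is a minimal sketch of one such proactive check: measuring the gap in positive-outcome rates across demographic groups before any model is trained. The dataset, column names, and interpretation are hypothetical illustrations, not a prescribed method:

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Return the gap between the highest and lowest positive-label
    rates across the groups in `group_col` (a demographic parity gap)."""
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval data: 'gender' is the protected attribute,
# 'approved' is the binary label a model would be trained to predict.
data = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [ 1,   0,   0,   1,   1,   1,   0,   0 ],
})

gap = selection_rate_gap(data, "gender", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50 for this toy data
```

A large gap does not by itself prove unfairness, but it flags where re-sampling, re-weighting, or a closer audit of how the data was sourced is warranted.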

Then there’s the concern around digitally manipulated synthetic media known as deepfakes. At the recent Munich Security Conference, some of the world’s biggest technology companies came together and pledged to fight deceptive AI-generated content. The accord comes amid escalating concern over the impact of misinformation driven by deepfake images, videos, and audio on the high-profile elections due to take place this year in the US, UK, and India.

Social media platforms and media organisations can leverage more such efforts to prevent the amplification of harmful deepfake videos. Intel, for example, has introduced a real-time deepfake detection platform – FakeCatcher – that can detect fake videos with 96% accuracy and return results in milliseconds.
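
Intel has not published FakeCatcher’s internals beyond noting that it looks for subtle physiological signals, such as the colour changes caused by blood flow, in video pixels. The sketch below is therefore only a generic, hypothetical frame-sampling pipeline, assuming OpenCV for video decoding; `score_frame` is a stand-in for a trained classifier, not Intel’s technology:

```python
import cv2  # OpenCV, assumed here for video decoding

def score_frame(frame) -> float:
    """Stand-in for a trained deepfake classifier: returns the probability
    that a frame is synthetic. A real detector would analyse learned or
    physiological cues; this placeholder simply returns 0.0."""
    return 0.0  # a real model inference call would go here

def video_fake_probability(path: str, every_n: int = 10) -> float:
    """Average per-frame fakeness scores over a sampled subset of frames."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            scores.append(score_frame(frame))
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# Usage (hypothetical file): print(video_fake_probability("clip.mp4"))
```

Scoring every n-th frame rather than all of them is one way a detector can keep latency low enough for real-time use.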

Lastly, while science-fiction fans indulge in conversations around technological singularity, there is a definite need to identify risks and define controls that preserve human agency – and with it, clear accountability – so as to avoid the unintended consequences of AI gone rogue.

Shaping ethical AI guidelines

Leading tech companies are increasingly defining ethical AI guidelines in an effort to build trust and transparency while achieving their business goals. This proactive approach is mirrored by governments around the world. Last year, US President Joe Biden signed an executive order on AI, outlining “the most sweeping actions ever taken to protect Americans from the potential risks of AI.” And now, the European Union has approved the AI Act, the world’s first comprehensive regulatory framework for governing AI. The rules will ban certain AI applications based on their potential risks and level of impact, introduce new transparency requirements, and mandate risk assessments for high-risk AI systems.

Like its global counterparts, the Indian government acknowledges AI’s profound societal impact, recognising both its potential benefits and the risks of bias and privacy violations. In recent years, India has implemented initiatives and guidelines to ensure responsible AI development and deployment. In March, the Ministry of Electronics and Information Technology (MeitY) revised its earlier advisory to major social media companies, removing a provision that required intermediaries and platforms to obtain government permission before deploying “under-tested” or “unreliable” AI models and tools in the country.

The new advisory retains MeitY’s emphasis on ensuring that all deepfakes and misinformation are easily identifiable, advising intermediaries either to label such content or to embed it with a “unique metadata or identifier”.
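
As an illustration of what embedding an identifier can look like at the file level, the sketch below attaches provenance fields to a PNG using Pillow’s text chunks. The field names are hypothetical, and this naive approach is easily stripped by re-encoding; production systems would more likely rely on a standard such as C2PA content credentials or robust watermarking:

```python
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src: str, dst: str, generator: str) -> str:
    """Attach an 'AI-generated' label and a content-hash identifier
    to a PNG via text chunks (lost if the file is re-encoded)."""
    img = Image.open(src)
    digest = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical field names,
    meta.add_text("generator", generator)   # not a mandated schema
    meta.add_text("content_id", digest)
    img.save(dst, pnginfo=meta)
    return digest

# Usage (hypothetical files and model name):
# label_ai_image("render.png", "render_labelled.png", "example-model-v1")
```

A content hash gives downstream platforms a stable identifier to match against, even when the surrounding metadata has been altered.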

To conclude, in a landscape where innovation is outpacing regulation, the significance of upholding responsible AI principles cannot be overstated. The potential for societal harm looms large when AI development is separated from ethical frameworks. Therefore, we must ensure that innovation is tempered with responsibility, safeguarding against the pitfalls of misuse, bias, and misinformation. Only through collective vigilance and unwavering dedication to ethical practice can we harness the true potential of AI for the betterment of humanity.

– Written by Santhosh Viswanathan, VP and MD, India region, Intel.

