AI, a tool of great potential and a threat of eroding privacy: the need for rigorous regulations
- Aarattrika Chanda
- Nov 21, 2024
- 2 min read
By Harsh Sinha
Introduction
From generating solutions for its users, AI is learning to step into a world of ebb and flow. This is the era of cyber attacks, unlike the medieval period of warriors riding chariots with swords. The only real difference between today's cyber-warfare and medieval warfare is that AI is the warrior's new sword: the wielder may be a government directing AI righteously under a set of rules, or any person corrupting AI's powers through deep fakes and data breaches. AI is advancing expeditiously; it has been programmed to learn by itself to cope with rampant surges of potential threats, and it obeys the command of its wielder even if that wielder is a wheeler-dealer. Deep fakes can significantly damage a person's life: the viral BuzzFeed deep fake of Obama in 2018 planted a seed of horror and fear in people's minds.
Additionally, the theft of data for personal gain has affected many individuals as well as government and private organisations. AI has proven to be a great asset in cybersecurity through its self-learning capability and promptness: it can train itself during an attack and outpace illicit or unprecedented activity by any remote user or organisation. The new era of automation and machine learning has begun, but the rise of AI in the digital world has also fuelled cyber attacks such as deep fakes and data breaches. India alone has suffered more than 21 percent of cyber attacks, which is above the global average. AI should be brought under the purview of stringent laws to safeguard people and preserve social harmony.
The US, China, Russia, Israel and the UK have embedded layers of cybersecurity laws and are considered far more developed than other countries in the era of cyber warfare. India is still learning to fill the gaps in its cybersecurity defences.
A rigid international law would provide a proper structure for reshaping the framework for AI and cybersecurity, and would additionally address the global challenges they pose.
AI in the field of cybersecurity
The growth of AI has provided tools that ease users' tasks and confer several powers that are advantageous in cybersecurity:
Power to scrutinize: AI keeps an eagle eye on every subject fed to its algorithm; through self-learning, it learns to scrutinize the device and detect anomalies.
Predictive analysis: AI predicts threats by analysing the data entering the server and recalculating it for unprecedented potential threats, sharpening the detection of any interference with the user's system.
Threat intelligence: Automation gives the benefit of calculating a threat from a distance and developing a defence mechanism to fortify the system. A threat approaching the system can be isolated, and a malicious incursion eliminated swiftly and precisely.
Vanguard against advanced threats: Cybersecurity is continuously threatened by attacks like DDoS, phishing, and brute force; AI has progressively earned its position by providing an automatic shield with evolving powers. A minimal illustrative sketch of this kind of self-learning detection follows this list.
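To make the idea concrete, here is a minimal sketch of the self-learning anomaly detection described above. It assumes scikit-learn's IsolationForest and a handful of made-up network-traffic features; a real deployment would rely on far richer telemetry and models, so treat this as an illustration rather than a working defence.

```python
# A toy sketch of AI-based anomaly detection: the model learns what "normal"
# traffic looks like and flags observations that deviate from it.
# The feature names and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per connection: [packets/sec, bytes/sec, failed logins]
normal_traffic = rng.normal(loc=[50, 4_000, 0.1],
                            scale=[10, 800, 0.3],
                            size=(1_000, 3))

# Train only on traffic presumed benign; the model learns the "normal" profile.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: one ordinary connection and one resembling a brute-force burst.
new_events = np.array([
    [52, 4_100, 0.0],      # looks like routine traffic
    [900, 90_000, 40.0],   # abnormal rates and many failed logins
])
labels = detector.predict(new_events)   # +1 = normal, -1 = anomaly
for event, label in zip(new_events, labels):
    print(event, "-> anomaly" if label == -1 else "-> normal")
```

The same pattern generalises to the predictive-analysis and threat-intelligence points above: the system is trained on known behaviour and then scores incoming activity continuously, isolating whatever falls outside the learned profile.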
Cybersecurity aims to fortify the kingdom of users' data against being sold to third parties or exploited for personal gain. There has been a dynamic incursion of harmful deep fake content intended to defame or manipulate people. AI, however, is not defined only by its positive values; it has dark sides that play a significant role in seeding horror in people's minds. In 2019, Forrester published a research report predicting that the number of such attacks would increase and that humans might not be able to detect them.
1. How do deep fakes stand against cybersecurity?
Generating an image or audio of someone is now within anyone's reach through generative AI. Generative AI can create new images, audio and video: the characteristics of existing data are fed into a generative adversarial network (a method used to generate deep fakes), which then produces new data with similar characteristics. It trains itself to produce hyper-realistic digital falsifications (a toy sketch of this adversarial loop follows the list below). The use of deep fakes has accelerated since 2017. This tremendous generative power to create hyper-realistic images and videos has come under scrutiny due to the potential risks of:
Spreading disinformation and hoaxes, which creates social discord among communities and is prone to introducing bias and manipulating elections in a nation or state.
The first malicious use of deep fakes was discovered in pornographic content, which gave rise to placing celebrity faces in videos or taking revenge on a particular victim. Around 96% of deep fakes are pornographic content. Deep fakes can harm any person's life and destroy their integrity, especially women's.
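For readers unfamiliar with how a generative adversarial network works, below is a toy sketch of the adversarial training loop, assuming PyTorch and one-dimensional synthetic "real" data. Actual deep-fake systems use large image and audio networks, but the generator-versus-discriminator mechanic is the same.

```python
# Toy GAN: a generator learns to produce samples that a discriminator
# can no longer distinguish from "real" data. All sizes are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples centred at 3.0
    fake = generator(torch.randn(64, 8))    # samples forged from random noise

    # Discriminator learns to tell real from fake.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The generated samples drift toward the statistics of the real data; scaled up to faces and voices, this is what makes the resulting falsifications hyper-realistic and hard to detect.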
2. Data breaches: a massive rat hole
Most of our critical information is held within the colonies of government or private websites, and a cyber-attack like a data breach is one of the biggest threats: it can be used to hold the critical information of any organisation or individual hostage for personal satisfaction or illegal monetary gain. One recently documented AI-assisted cyber-attack used an AI-enabled botnet to perform a DDoS attack on the servers of TaskRabbit, leading to the theft of 3.75 million records of private and financial data. AI technology holds a large mass of data, and anything entered into its servers is stored in its library, including sensitive information that is no longer under the user's control. At the same time, AI has the strength to shield data from being used spitefully, as it can effectively tackle cyber-attacks.
AI-assisted cyber attacks in the purview of law
The discussion so far has focused on the strengths of AI in detecting and preventing anomalies that harm society. Deep fakes and data breaches are two of the most harmful AI-assisted attacks of the automation and machine learning era. AI is taking on a magnanimous role in extensive cyber warfare by fortifying data through self-learning and quick responses. AI has a vast impact on the privacy of any individual or organisation, and it needs to be regulated through transnational laws. The EU, for instance, has specifically recognised the Right to be Forgotten and the Right of Access to a user's personal data.
The right to privacy is a basic fundamental right of every individual and must be protected by the government.
The government, with the assistance of the media and private entities, could collaborate with platforms that share individuals' characteristics in the form of sound or visuals, and scrutinise the data before it is uploaded into the public domain for the safety of people. Establishing a dedicated organisation would help detect deep fakes through forensic detection techniques. The US has recently improved the chances of detecting deep fake content through the Defense Advanced Research Projects Agency (DARPA), whose programmes are centred on scrutinising deep fake content. Countries should also adopt blockchain technology, currently viewed as a decentralised system. Blockchain could help preserve people's integrity by verifying the origin of content and creating an unchangeable record of deep fakes produced on AI-assisted platforms.
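As a rough illustration of that last suggestion, the sketch below chains hashed records of generated media so that any later tampering with an entry is detectable. The ProvenanceLedger class and its fields are hypothetical names invented for this example; they are not an existing standard, library, or any platform's actual API.

```python
# A minimal hash-chained provenance ledger: each entry stores the hash of a
# piece of generated media plus the hash of the previous entry, so altering
# any record breaks the chain.
import hashlib
import json
import time


class ProvenanceLedger:
    def __init__(self):
        self.chain = []

    def _hash_entry(self, entry: dict) -> str:
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def record(self, media_bytes: bytes, creator: str) -> dict:
        entry = {
            "media_hash": hashlib.sha256(media_bytes).hexdigest(),
            "creator": creator,
            "timestamp": time.time(),
            "prev_hash": self.chain[-1]["entry_hash"] if self.chain else None,
        }
        entry["entry_hash"] = self._hash_entry(entry)
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        # Valid only if every entry still hashes to its stored value and
        # points to the hash of the entry before it.
        for i, entry in enumerate(self.chain):
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if self._hash_entry(body) != entry["entry_hash"]:
                return False
            if i > 0 and entry["prev_hash"] != self.chain[i - 1]["entry_hash"]:
                return False
        return True


ledger = ProvenanceLedger()
ledger.record(b"synthetic-video-bytes", creator="ai-platform-x")
ledger.record(b"another-clip", creator="ai-platform-y")
print("chain intact:", ledger.verify())        # True
ledger.chain[0]["creator"] = "someone-else"    # tamper with a record
print("chain intact:", ledger.verify())        # False
```

In a real deployment the ledger would be replicated across a decentralised network rather than held by one party, which is what makes the record effectively unchangeable.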
Conclusion
AI has enabled the effective execution of tasks set by its users, but it also brings difficulties that fall under the umbrella of 'cyber-attack'. Strict laws are needed to moderate the power of AI and prevent any greater evil.
Cyber laws are indeed a task to be framed and applied in each country as a bastion protecting citizens from deleterious cyber-attacks such as deep fakes and data breaches.
Therefore, a strict code of conduct must be followed by AI-based platforms, and people should be aware of what they share and how that piece of information will be used. The following points could be taken into account to protect the right to privacy and safeguard the integrity of any individual or organisation:
i. One's personal data should be used fairly and loyally, with the user given the necessary details of how their data will be used.
ii. Data transparency must be provided to bring users closer to the personal information held in the grip of AI-assisted platforms.
iii. Privacy officers should be embedded in the system to prevent data being used for commercial purposes, political gain or criminal activity.
iv. Rigorous laws will form the structure of a data-protected environment, ensuring that data privacy is safeguarded.
v. Informing the public and spreading awareness so that people are hypervigilant about the data they enter on social media or any other platform; this will also help counter deep fakes by teaching people how to detect them.