Thu. Jan 26th, 2023

Ethical issues

The digital divide

The digital divide is the gap that exists between individuals who have access to modern information and communication technology and those who lack access. Several distinct gaps contribute to digital inequality worldwide; three of the most significant types are discussed below.

Digital inequality is evident between communities living in urban areas and those living in rural settlements; between socioeconomic groups; between less economically developed countries and more economically developed countries; and between educated and uneducated populations.
Even individuals with broadband access can find themselves on the wrong side of a digital divide: low-performance computers, limited broadband speeds and limited access to subscription-based content all widen the gap.

3 Types of Digital Divide


There are numerous types of digital divide that affect people's access to the internet. Some of the most visible gaps in digital inequality include:

  1. Gender Divide
    According to a 2013 report, the internet gender gap is striking, especially in developing countries. Though mobile connectivity is spreading rapidly, it is not spreading equally, and women are still being left behind.

Men in low-income countries are 90% more likely to own a mobile phone than women, which translates to 184 million women lacking access to mobile connectivity. Even among women who do own mobile phones, access is limited: 1.2 billion women in low- and middle-income countries have no access to the internet.

  2. Social Divide
    Internet access creates relationships and social circles among people with shared interests. Social media platforms like Twitter and Facebook create online peer groups based on similar interests.

More than ever, internet usage influences social stratification, and this is evident in the divide between those who are connected to the internet and those who are not. Non-connected groups are sidelined, since they do not share in the internet benefits enjoyed by connected groups.

  3. Universal Access Divide
    Individuals living with physical disabilities are often disadvantaged when it comes to accessing the internet. They may have the necessary skills but be unable to use the available hardware and software.
    Some parts of the world will remain segregated from the internet and its vast potential due to lack of digital literacy skills, low education levels, and inadequate broadband infrastructure.

Read more here.

The elderly and those with disabilities

The older generation are often distrustful of new technologies and may lack the background knowledge required to make sense of them. This is such a widespread issue that many websites and training programs, such as Age Support, are dedicated to helping older users of technology.

Disabled users also face barriers to the use of technology. For example, blind users are unable to view content on websites. In many cases, accessibility aids can help: a screen reader can narrate the text from a screen for users whose eyesight precludes them from viewing information. Website designers can assist by using accessibility tags – for example, when adding an image, a text description of the image should also be added, so that it can be read aloud and the content remains accessible.
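To make this concrete, the short sketch below scans a fragment of HTML and flags images that are missing the text description discussed above. It uses only Python's standard html.parser; the tags and file names are invented for the example, and a real accessibility checker would do much more.

```python
# Minimal sketch: flag <img> tags that lack the "alt" text description
# screen readers rely on, using only Python's standard library.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of <img> tags with no (or an empty) alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="cat.png" alt="A sleeping cat"><img src="dog.png">')
print(checker.missing_alt)  # ['dog.png'] – this image needs a description
```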

Other examples of accessibility aids include screen magnifiers, sticky keys, and keyboard shortcuts (for users who struggle with pointing devices). Advances in technology and processing power have also allowed for increased use of speech recognition.

Privacy issues

The ever-increasing collection of personal data means that the number of privacy issues faced by consumers will only rise. We have gone from having a handful of usernames and passwords for email, perhaps banking and a social network, to having many interconnected systems which all collect data, share the data with each other, and store it online. In addition, the type of data being collected has become more personal. Shopping habits, browsing habits, interests, where you have clicked on a page, and huge tranches of health and location data (your heart rate, where you have been, etc.) can all be stored and accessed by companies – and by hackers.

Protection of personal data

Read here about protection of personal data.

Legal and ethical considerations

Computer Misuse Act

Risk of loss of control of personal data stored online

Data held online is only as secure as the credentials used to control access to it. Emails, images, documents and more are stored online, and often the only thing keeping them private is the user's credentials. If these are compromised, the contents can be accessed and exploited by malicious actors.

Additionally, digital artifacts are easy to copy. Even if a password is compromised only briefly, documents can be duplicated in that window, and once additional copies have been made there is no way for the rightful owner to take back control of that data.

Possible dangers of artificial intelligence

Artificial intelligence systems typically use machine learning to identify patterns or outcomes in large data sets. For example, they can be used to identify the relationship between current weather, distant weather systems, and the most likely future weather.

In order for these systems to work, they are trained using data with known outcomes. The better the training data, the more accurate the model's output. This raises a major issue: bias can be introduced into artificial intelligence models simply through the use of biased training data. This may not be intentional: it may be that the only training data available is biased.

This famously happened at Amazon when the company tried to use AI/ML to screen job applicants. CVs were matched against the performance of existing staff, and the model concluded that female staff were not good workers. This was not because there was any evidence for it – it was because there were virtually no female staff, so the model could not make a correlation between working well and being female. As a result, the system screened out all female applicants. Fortunately, this was identified quickly, and CVs were assessed manually instead. However, there is a real danger that similar examples will arise as the use of AI/ML increases.
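The toy sketch below illustrates the mechanism behind that failure. It is not Amazon's actual system – the data and the scoring rule are invented for illustration – but it shows how a model trained on data containing almost no examples of one group ends up scoring that group unfairly.

```python
# Toy illustration of bias from skewed training data (invented example,
# not Amazon's system): a naive "model" scores applicants by the
# historical success rate of past applicants sharing the same feature.
from collections import defaultdict

# Skewed training data: (gender, performed_well) pairs, almost no
# female examples – mirroring the situation described above.
training = [
    ("male", 1), ("male", 0), ("male", 1), ("male", 1),
    ("male", 0), ("male", 1), ("male", 1), ("female", 0),
]

totals = defaultdict(lambda: [0, 0])   # feature -> [successes, count]
for feature, performed_well in training:
    totals[feature][0] += performed_well
    totals[feature][1] += 1

def score(feature):
    """Historical success rate of applicants with this feature."""
    successes, count = totals[feature]
    return successes / count if count else 0.0

print(round(score("male"), 2))    # 0.71 – plenty of positive examples
print(round(score("female"), 2))  # 0.0  – one poor example dominates
```

With only a single female example in the data, the model's estimate for the whole group collapses to that one data point, so every female applicant is screened out – exactly the pattern described above.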

Dangers and ethical issues of robotic weapon systems

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time in 2020, according to a 2021 United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity's final one.

US-based military robot manufacturer Ghost Robotics has strapped a sniper rifle to a robotic dog, in the latest step towards autonomous weaponry. Some people have reacted with moral outrage at the prospect of a killer robot made in the image of our loyal best friend. But if this development makes us pause for thought, in a way that existing robot weapons don't, then perhaps it serves a useful purpose after all. The response to Ghost Robotics' latest creation is reminiscent of an incident involving Boston Dynamics, another maker of dog-like robots (which, in contrast, strongly frowns on the idea of weaponising them).

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Autonomous Weapons

Autonomous weapon systems are not exactly new to the military or to warfare. The evolution of Unmanned Aerial Vehicles (UAVs), or drones, has been very rapid in the last decade, and these have been put to use in various military applications. Images of terrorists being neutralised in Afghanistan or Iraq by combat UAVs became common, though the recent attacks on oilfields in Saudi Arabia by terror groups using drone swarms made world leaders sit up and take notice of the evolving threat. While UAVs and drones used to operate under human control or command, or on a pre-fed mission profile, the disruption caused by developments in Artificial Intelligence (AI) and Machine Learning (ML) opened up the possibility of making them autonomous. Close on the heels of UAVs, Unmanned Ground Vehicles (UGVs) also gained prominence, both for combat and for logistics in battlefields. Mounting these platforms with guns and sensors integrated with AI modules makes them autonomous and capable of making decisions according to the scenario presented.

Why Do Autonomous Weapons Pose a Human Rights Dilemma?

Human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of pre-emptive attacks, and because they could become combined with chemical, biological, radiological and nuclear weapons themselves.

The main problems associated with a robotic or autonomous weapon system can be summarised as:

  • Problem of Misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? The problem here is not that machines will make such errors while humans won't. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans, like a recent U.S. drone strike in Afghanistan, seem like mere rounding errors by comparison. The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don't know why they erred and, therefore, how to correct them.
  • Low End Proliferation. The militaries developing autonomous weapons are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.
  • High End Proliferation. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use. High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.
  • Laws of Armed Conflict (LOAC). Autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the LOAC. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honour from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Non-governmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

AI – The Double-Edged Sword

Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. According to a recent United Nations report, Libyan government forces hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to attack targets without requiring data connectivity between the operator and the munition”. The deadly drones were Turkish-made quadcopters about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so.

Artificial intelligence researchers have been warning for years of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention. There is a historical parallel: well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about the future of nuclear weapons, and a secret petition sent to President Harry S. Truman in July 1945 accurately predicted the arms race that followed. This time the threat comes from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them "killer robots".

The regulation of AI must consider the harms and the benefits of the technology. Harms that regulation might seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the uncontrolled development of autonomous weapons. Sensible AI regulation would maximise its benefits and mitigate its harms.

The Future

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints: one programmer can command hundreds of autonomous weapons, and an army can take on the riskiest of missions without endangering its own soldiers. Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionise warfare. Previously, waging war required an army of soldiers who had to be persuaded to follow orders, then trained, paid, fed and maintained; autonomous weapons remove that constraint, making them weapons of immense destruction.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. For an analogy, consider the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy wars to the blowback experienced around the world today, and multiply that by every country currently aiming for high-end autonomous weapons.

The world stands at a crossroads on this issue. Allowing machines to decide who lives and who dies needs to be seen as morally unacceptable, and diplomats at the UN must negotiate a treaty limiting the use of such weapons, just as treaties limit chemical, biological and other weapons, in order to prevent potentially disastrous situations. But with the US competing with China and Russia to achieve "AI supremacy" – a clear technological advantage over rivals – regulation has so far taken a back seat.

Conclusion

In an era where advances in weapon technology take place at breakneck speed, and where such a technological edge can exponentially bolster a nation's chances of success in conflict or at the bargaining table, the advent of robotic weapons represents a 'redshift' in combat capabilities. The caveat is that such disruptive technology must be kept out of the hands of irresponsible players – rogue nations and non-state elements – so that it does not point towards Armageddon, in any sense of the word.

Negative issues related to internet use

A world with Internet access brings many benefits: it has enabled myriad new services and businesses that simply could not exist without it.

It also causes many problems:

  • Addiction – use of the internet to enable gambling or the consumption of pornography. While these activities are perfectly legal, addiction to them can cause problems in people's everyday lives, and being able to access such material online means that an addict can seek it out 24 hours a day.
  • Illegal activity – many copyrighted materials, such as videos and music, are available online from shady websites and other sources. These sites facilitate copyright theft, and often host malware, either alongside the material being hosted or disguised as the desired material.
  • Darknet – the darknet, or dark web, was developed by the US military (DARPA) in order to facilitate secure and resilient communication over the Internet. The technologies used make it very difficult (supposedly impossible, but a number of high-profile cases have shown that it isn't) to identify where material is being hosted, or who is viewing it. For this reason, it is used not only for evading censorship, but also for hosting illegal stores and hiding illegal activity – such as selling drugs and weapons, and distributing child abuse material.
  • Terrorist use – with modern applications supporting end-to-end encryption (no one but the sender and receiver can view the content of messages) and with encryption algorithms easy to implement via readily available libraries, it is trivial to exchange data online quickly, cheaply and anonymously, with the contents of those messages secured (see the sketch after this list). It is therefore inevitable that this is a preferred method of communication for terrorists. Obtaining chat logs is far more difficult for internet applications than it is for phone lines.
  • Spreading misinformation – the speed and ease with which material can be shared and re-shared makes the Internet the perfect medium for dissemination of information. This has been highlighted many times, most recently in the spread of conspiracy theories surrounding vaccinations and the Covid pandemic.
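To illustrate how little effort the "readily available libraries" mentioned in the terrorist-use point require, here is a minimal sketch using Python's third-party cryptography package. It shows symmetric encryption with a shared key; real end-to-end messaging systems layer key exchange on top of primitives like this, but the point about ease stands.

```python
# Sketch of how trivially message contents can be secured with an
# off-the-shelf library (pip install cryptography). Real end-to-end
# messaging adds key exchange on top; the ease is the point here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # a shared secret between sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")
print(token)                    # ciphertext: unreadable without the key
print(cipher.decrypt(token))    # b'meet at the usual place'
```

A handful of lines is all it takes to make intercepted traffic useless to anyone without the key, which is why obtaining usable chat logs from such applications is so much harder than tapping a phone line.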