I wonder if “Do no harm” is too restrictive: AI that stops illegal resource harvesting is good even if those doing the harvesting are harmed.
Hello @toontalk
That’s a thought-provoking situation. I think we need to weigh the trade-offs between stopping illegal activities that harm the environment and the well-being of the communities involved.
“Do no harm” may be a useful dictum for the ethical provision of medicine, since doctors can, if a treatment is potentially harmful, opt to do nothing (on the assumption that acts of omission cannot be deemed harmful). But in matters of global public welfare such as climate protection, where people all over the world are affected differentially and there are multiple objectives (local air and water quality, biodiversity, GHG emissions, economic cost, jobs, equity/justice, etc.), “Do no harm” is probably a recipe for paralysis.
I too had wondered something similar, and posed a question to ChatGPT 4 after some initial prompting. I think of “do no harm” as a guiding ethos rather than something concrete and absolute. For example, consider training an image recognition model that detects coca plantations from aerial/satellite imagery for eradication: how do we measure the harm that befalls the farmers? A number of questions need to be considered before deploying such a solution, such as:
- Why are they growing these crops in the first place? (money, threats of violence by traffickers, necessity, etc.)
- What can be done to mitigate the “harm” if unavoidable? (provide incentives, policy change, education)
The definition of “harm” is going to be subjective, and it cannot be pinned down without serious thinking about the root cause(s) of the problem(s) being explored as candidates for AI solutions. I think that is the point of focusing on “do no harm” in the use of AI for Good: it’s not just blindly reaching for the hammer when, from across the room, everything starts looking like a nail.
How does the principle of “do no harm” work when building an AI to detect illegal logging activities? Does the harm done to even a minority of people (i.e., the illegal loggers) outweigh the potential benefits of detecting these activities, under the “do no harm” ethos?
The principle of “do no harm” in AI is closely related to ethical considerations, ensuring that all individuals impacted by the deployment of an AI system are left better off or, at the very least, unharmed. This principle becomes complex when it comes to applications such as detecting illegal logging activities, where there is a clear societal and environmental benefit, yet a minority group (the illegal loggers) may be negatively affected.
First, it’s important to understand that “do no harm” doesn’t mean “do no harm to anyone under any circumstances”. It’s a principle based on minimizing harm and maximizing benefits, particularly from a societal and environmental perspective. In this case, detecting and preventing illegal logging has broad societal benefits including preservation of biodiversity, mitigation of climate change, and protection of the rights of indigenous communities who often suffer from illegal logging practices.
The “harm” to illegal loggers can be viewed as a necessary negative impact incurred to prevent broader environmental and social harm. Moreover, that harm is a consequence of their illegal activities, which are harmful in themselves. The AI system isn’t causing harm but facilitating the enforcement of existing laws and regulations intended to protect the environment and society.
However, the implementation of such a system should still adhere to privacy and fairness principles. For instance, it should not be used to discriminate, harass, or unjustly penalize individuals or groups. If, for example, the system was used to unfairly target specific ethnic or social groups, that would be a violation of the “do no harm” principle.
Additionally, it’s essential to consider the socioeconomic factors that lead people to illegal logging. If the implementation of the AI system results in job loss for communities who rely on logging due to a lack of alternatives, it would be crucial to accompany the system with policies or initiatives aimed at providing alternative means of livelihood for those communities. This way, the “do no harm” principle extends beyond just deploying the AI system and into considering the broader socioeconomic impacts and responses.
Lastly, the principle of “do no harm” also suggests we should work to mitigate any unintended consequences. For instance, if detection of illegal logging activities inadvertently leads to increased surveillance or privacy infringements of unrelated third parties, these risks should be proactively addressed to ensure the system is as harm-free as possible.
I too feel the bigger picture should be considered: illegal resource harvesting of any nature, growing narcotic crops, or human slavery of any sort (sexual, labor, etc.) does harm the country, humanity, and the world in general. I am also aware that people rely on these sorts of illegal activities to survive, but isn’t that a social problem for the respective governments, and perhaps aid agencies or other organizations, to tackle?