Lavender AI System Used in Israel’s War Against Hamas Sparks Concerns About Civilian Deaths

Tel Aviv, Israel – Experts have long cautioned about the risks of using artificial intelligence in warfare, concerns that go well beyond Hollywood-style autonomous killer robots. Recent reporting suggests those warnings have become reality in Gaza, where the Israel Defense Forces (IDF) have used an AI-based system known as Lavender to identify targets for assassination.

According to the Israeli publications +972 Magazine and Local Call, Lavender was trained on data sources such as photos, cellular records, communication patterns, and social media connections to identify individuals affiliated with Hamas and Palestinian Islamic Jihad; the training data reportedly even included some Palestinian civil defense workers. The system assigned each individual in Gaza a score based on these characteristics, and those with high scores were marked as potential targets for assassination, producing a list that at one point numbered around 37,000 people.
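The internal workings of Lavender have not been published, but the reporting describes a familiar machine-learning pattern: signals drawn from surveillance data are combined into a single score per person, and anyone above a threshold is flagged. The sketch below is purely illustrative of that general pattern; the feature names, weights, and threshold are assumptions and do not reflect the actual system.

```python
# Illustrative sketch of a generic feature-based scoring pipeline.
# All feature names, weights, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    features: dict  # normalized signals, e.g. from comms metadata or a social graph

# Assumed weights for combining signals (purely illustrative).
WEIGHTS = {
    "comms_pattern": 0.4,
    "social_graph": 0.35,
    "location_history": 0.25,
}

TARGET_THRESHOLD = 0.8  # assumed cut-off above which a profile is flagged

def score(profile: Profile) -> float:
    """Weighted sum of feature signals, clamped to [0, 1]."""
    s = sum(WEIGHTS.get(k, 0.0) * v for k, v in profile.features.items())
    return max(0.0, min(1.0, s))

def flag_targets(profiles: list) -> list:
    """Return (name, score) pairs above the threshold, highest score first."""
    scored = [(p.name, score(p)) for p in profiles]
    return sorted((x for x in scored if x[1] >= TARGET_THRESHOLD),
                  key=lambda x: x[1], reverse=True)
```

The point of such a design, and the source of its danger, is scale: once the scoring function exists, it can rank an entire population far faster than any human review process can keep up with.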

Despite internal awareness that the system was only about 90 percent accurate in identifying militants, sources within Israeli intelligence said human review was minimal. With so long a target list, verification was often cursory, sometimes amounting to little more than confirming that the target was male. That hasty process, combined with an IDF decision to accept collateral damage of up to 15 to 20 people per strike, contributed to the high civilian death toll in Gaza during the conflict.

In response to the published reports, the IDF denied using AI systems to identify terrorist operatives, describing Lavender instead as a database for cross-referencing intelligence sources. The military maintained that strikes were directed at military targets in accordance with the international rules of proportionality, and that exceptional incidents were thoroughly investigated.

The use of AI systems like Lavender raises ethical questions about the intersection of technology and warfare. As discussions around responsible military use of artificial intelligence continue, the international community faces challenges in ensuring compliance with humanitarian law and ethical considerations in the development and deployment of AI systems in conflict zones.

The revelations from Gaza may prompt further calls for global treaties regulating the use of AI technology in warfare, with the hope of minimizing civilian casualties and promoting responsible military practices. As nations grapple with the implications of AI in modern warfare, the need for transparent and ethical guidelines becomes increasingly apparent.