CBS News
India’s top court slams states over “bulldozer justice,” razing of illegal homes that allegedly targets Muslims
New Delhi — India’s Supreme Court ruled Wednesday that authorities demolishing illegally-constructed homes and other properties belonging to suspected criminals is unconstitutional and must cease. The practice is allegedly used widely by several state governments to punish suspects outside the court justice process, and is commonly known as “bulldozer justice.”
“The executive can’t become a judge and decide that a person accused is guilty and, therefore, punish him by demolishing his properties. Such an act would be transgressing [the] executive’s limits,” the court said in a 95-page judgement.
The court issued its ruling in response to several petitions over a spate of home demolitions targeting suspected criminals in states governed by Prime Minister Narendra Modi‘s Bharatiya Janata Party (BJP) in recent years. Critics have accused the BJP state administrations of using bulldozer justice primarily to target Muslims — an accusation the party has repeatedly denied.
BJP state officials have argued that due process of law has been followed in carrying out the demolitions, but the court said authorities had adopted a “pick and choose” attitude toward illegally constructed homes, singling out those belonging to Muslims suspected of other crimes while sparing similar, but non-Muslim-owned illegal dwellings in the same area.
“In such cases, where the authorities indulge in arbitrary pick and choose of structures and it is established that soon before the initiation of such an action an occupant of the structure was found to be involved in a criminal case, a presumption could be drawn that the real motive for such demolition proceedings was not the illegal structure, but an action of penalizing the accused without even trying him before the court of law,” the court said.
One of the petitions to the Supreme Court was filed over the April 2022 demolition of dozens of homes belonging largely to Muslims following sectarian clashes in Delhi’s Jahangirpuri neighborhood, which drew allegations of religious discrimination and extrajudicial punishment.
“The chilling sight of a bulldozer demolishing a building… reminds one of a lawless state of affairs,” Justices B.R. Gavai and K.V. Viswanathan said in the court’s Wednesday judgment. “Our constitutional ethos and values would not permit any such abuse of power and such misadventures cannot be tolerated by the court of law.”
The court warned state authorities it would take action against officials found guilty of “such highhanded and arbitrary” actions and issued detailed guidelines for the demolition of homes constructed without the required permits.
The new guidelines make it mandatory for authorities to give at least 15 days’ advance notice to an occupant before an illegal home is demolished and to explain the reason for the building being razed.
The new guidelines state that occupiers of such properties must be given sufficient time to either remove the construction or challenge the demolition order in a court.
Authorities in five of India’s 28 states bulldozed 128 structures over the course of just three months in 2022, human rights group Amnesty International said in a report in February.
CBS News
How did The Onion’s Infowars acquisition go down and why?
Be the first to know
Get browser notifications for breaking news, live events, and exclusive reporting.
CBS News
Google AI chatbot responds with a threatening message: “Human … Please die.”
A grad student in Michigan received a threatening response during a chat with Google’s AI chatbot Gemini.
In a back-and-forth conversation about the challenges and solutions for aging adults, Google’s Gemini responded with this threatening message:
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
The 29-year-old grad student was seeking homework help from the AI chatbot while next to his sister, Sumedha Reddy, who told CBS News they were both “thoroughly freaked out.”
“I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time to be honest,” Reddy said.
“Something slipped through the cracks. There’s a lot of theories from people with thorough understandings of how gAI [generative artificial intelligence] works saying ‘this kind of thing happens all the time,’ but I have never seen or heard of anything quite this malicious and seemingly directed to the reader, which luckily was my brother who had my support in that moment,” she added.
Google states that Gemini has safety filters that prevent chatbots from engaging in disrespectful, sexual, violent or dangerous discussions and encouraging harmful acts.
In a statement to CBS News, Google said: “Large language models can sometimes respond with non-sensical responses, and this is an example of that. This response violated our policies and we’ve taken action to prevent similar outputs from occurring.”
While Google referred to the message as “non-sensical,” the siblings said it was more serious than that, describing it as a message with potentially fatal consequences: “If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge,” Reddy told CBS News.
It’s not the first time Google’s chatbots have been called out for giving potentially harmful responses to user queries. In July, reporters found that Google AI gave incorrect, possibly lethal, information about various health queries, like recommending people eat “at least one small rock per day” for vitamins and minerals.
Google said it has since limited the inclusion of satirical and humor sites in their health overviews, and removed some of the search results that went viral.
However, Gemini is not the only chatbot known to have returned concerning outputs. The mother of a 14-year-old Florida teen, who died by suicide in February, filed a lawsuit against another AI company, Character.AI, as well as Google, claiming the chatbot encouraged her son to take his life.
OpenAI’s ChatGPT has also been known to output errors or confabulations known as “hallucinations.” Experts have highlighted the potential harms of errors in AI systems, from spreading misinformation and propaganda to rewriting history.
CBS News
Last two House Republicans who supported Trump impeachment to return
Be the first to know
Get browser notifications for breaking news, live events, and exclusive reporting.