AI cars must be banned: Robotaxi ran over & dragged pedestrian

vokazu

Legend
Before more people get killed, we should stop AI cars.



Cruise recalls entire fleet after robotaxi ran over, dragged pedestrian

Kirsten Korosec@kirstenkorosec / 1:51 AM GMT+10•November 9, 2023


Image Credits: Andrej Sokolow/picture alliance / Getty Images

GM self-driving car subsidiary Cruise issued a recall for 950 vehicles equipped with its autonomous vehicle software following a crash that left a pedestrian, who had initially been hit by a human-driven car, stuck under and then dragged by one of the company’s robotaxis.

The company said in a blog post and in the recall notice filed with the National Highway Traffic Safety Administration that it issued the recall after an analysis of the robotaxi’s response to the October 2 incident found the “collision detection subsystem may cause the Cruise AV to attempt to pull over out of traffic instead of remaining stationary when a pullover is not the desired post-collision response.”

In that October incident, a pedestrian was struck by a human driver and then landed in the adjacent lane where a Cruise robotaxi was driving. The robotaxi initiated its brakes and came to a stop with the pedestrian under the vehicle. The robotaxi then attempted to pull over, dragging the woman some 20 feet.


“Although we determined that a similar collision with a risk of serious injury could have recurred every 10 million – 100 million miles of driving on average prior to the software update, we strive to continually improve and to make these events even rarer. As our software improves, it is likely we will file additional recalls to inform both NHTSA and the public of updates to enhance safety across our fleet,” the company wrote in the blog post.

Cruise and its parent company GM had been under increased scrutiny for weeks following several incidents, including a collision with an emergency response vehicle. The opposition turned into regulatory action, however, following the October 2 event.

The California Department of Motor Vehicles and the state’s Public Utilities Commission pulled all permits that allowed Cruise to commercially operate a fleet of robotaxis on public roads in San Francisco. Two days later, the company paused all driverless testing and operations across its fleet, which included Austin, Houston and Phoenix. It also halted production of its custom-built Cruise Origin vehicles.

The company has been in crisis mode ever since, particularly as new reports have emerged that expose the company’s safety practices and potential flaws in its software. Cruise is also facing federal investigations into how autonomous vehicles interact with pedestrians.

Cruise said in its blog post, confirming earlier reports, it is conducting a search to hire a chief safety officer who will report directly to the CEO, hired Exponent to conduct a technical review and retained law firm Quinn Emanuel to examine Cruise’s response to the October 2 incident, including the company’s interactions with law enforcement, regulators and the media.
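To put Cruise’s stated recurrence rate in perspective, here is some back-of-the-envelope arithmetic. The annual fleet mileage is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope check on Cruise's stated recurrence rate.
# The annual fleet mileage is an assumed value for illustration only.

miles_per_event_worst = 10_000_000    # one similar event per 10M miles (Cruise's lower bound)
miles_per_event_best = 100_000_000    # one similar event per 100M miles (Cruise's upper bound)
fleet_miles_per_year = 5_000_000      # assumed annual fleet mileage

events_per_year_best = fleet_miles_per_year / miles_per_event_best    # 0.05
events_per_year_worst = fleet_miles_per_year / miles_per_event_worst  # 0.5

print(f"Expected similar events per year: "
      f"{events_per_year_best:.2f} to {events_per_year_worst:.2f}")
```

Even under Cruise’s own numbers, a growing fleet would eventually see a repeat; the update aims to change what the car does when it happens.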
 
GM had remote workers taking over control of the cars, reportedly every 2.5 to 5 miles of driving, with one remote worker responsible for 15 cars.
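Taken at face value, those figures imply a heavy operator workload. The average speed below is an assumed value for illustration, not from any report:

```python
# Rough operator-workload arithmetic from the figures in the post.
# The average urban robotaxi speed is an assumption; the 2.5-5 mile
# intervention interval and 1 operator per 15 cars are from the post.

avg_speed_mph = 12.0        # assumed average urban robotaxi speed
cars_per_operator = 15      # figure from the post

def interventions_per_operator_hour(miles_per_intervention: float) -> float:
    """Interventions one operator would handle per hour across 15 cars."""
    per_car_per_hour = avg_speed_mph / miles_per_intervention
    return per_car_per_hour * cars_per_operator

for interval in (2.5, 5.0):
    print(f"one takeover every {interval} mi -> "
          f"{interventions_per_operator_hour(interval):.0f} per operator-hour")
```

At the assumed speed that works out to dozens of interventions per operator every hour, which is why the staffing ratio drew attention.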

 
Before more people get killed, we should stop AI cars.



Poor reporting. The article does not mention whether or not the pedestrian survived.
 
They are updating the software so the car remains still instead of pulling over. That suggests the pedestrian survived.
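The fix described amounts to changing the default post-collision behaviour. A minimal sketch of that decision, with hypothetical names and inputs (Cruise’s actual software is not public):

```python
# Hypothetical sketch of the post-collision policy change described in
# the recall notice. All names and logic are illustrative assumptions;
# Cruise's real collision detection subsystem is not public.

def post_collision_response(person_may_be_under_vehicle: bool,
                            blocking_traffic: bool) -> str:
    """Choose what the vehicle does immediately after a collision."""
    if person_may_be_under_vehicle:
        # The recalled behaviour pulled over regardless; the update keeps
        # the vehicle stationary when pulling over could drag someone
        # trapped underneath.
        return "remain_stationary"
    if blocking_traffic:
        return "pull_over"
    return "remain_stationary"
```

The point of the change is that "pull over" is only a safe default when nobody can be trapped under the vehicle.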
 
The main question in my mind about AI and the electrification of everything is whether it can be trusted, and whether individual freedom and privacy can be guaranteed. In my opinion, the answer is no.
There are benefits to automated cars, such as offering mobility to people who cannot drive due to disability, age or some medical condition, and for that they could be a good thing. But even now you could still employ people to work as caregivers, as we do.
My problem is that all this electrification of everything has tended towards greater centralisation of power, and I don’t trust that.
I think with everything we do there should be optionality and backup plans.
 

Electrification happened a century ago. I think you are referring to digitisation or some such. It's the software that creates the problem and not the power source.
 
They are updating the software so the car remains still instead of pulling over. That suggests the pedestrian survived.
If the pedestrian survived, in what condition? I assume that she was injured, but maybe not? If not, I want to hire her as my bodyguard, because she sounds indestructible.
 
The automated vehicle did not cause the accident. The pedestrian was thrown into its path. Instead of remaining stationary, it moved to the curb to protect the occupants rather than the victim!


Last month, a pedestrian in San Francisco was struck by a hit-and-run driver and thrown into an adjacent lane, where she was hit a second time by a Cruise robotaxi that was not able to stop in time and then dragged her.

The recall addresses circumstances when the software may cause the Cruise AV to attempt to pull over out of traffic instead of remaining stationary "when a pullover is not the desired post-collision response," Cruise said.
 
The automated vehicle did not cause the accident. The pedestrian was thrown into its path. Instead of remaining stationary, it moved to the curb to protect the driver rather than the victim!


Last month, a pedestrian in San Francisco was struck by a hit-and-run driver and thrown into an adjacent lane, where she was hit a second time by a Cruise robotaxi that was not able to stop in time and then dragged her.

The recall addresses circumstances when the software may cause the Cruise AV to attempt to pull over out of traffic instead of remaining stationary "when a pullover is not the desired post-collision response," Cruise said.
What we really need is the ability to do a recall of bad batches of idiot humans when they cause serious accidents.
 
I suspect that autonomous vehicles are already far safer than human drivers. Need to make sure humans don't get to drive.
Indeed yes. Driverless vehicles, despite the accidents, are probably safer than vehicles driven by humans. The few odd accidents of autonomous vehicles keep making headlines, while the hundreds of drunk or fatigued human drivers who kill many rarely make the news. On the other hand, if all vehicles were autonomous, the algorithms would actually get even better, because they would no longer need code to handle moronic drivers on the road.
 
I'm not quite sure why America allows for such experimentation on public roads.
Student drivers are often testing on public roads and causing accidents. A friend’s father hit his head on a curb and died when he was on his morning walk because an idiot learner on his motorbike didn’t know how to navigate a bend.
 
Student drivers are often testing on public roads and causing accidents. A friend’s father hit his head on a curb and died when he was on his morning walk because an idiot learner on his motorbike didn’t know how to navigate a bend.

Learning under professional guidance?
 
Indeed yes. Driverless vehicles, despite the accidents, are probably safer than vehicles driven by humans. The few odd accidents of autonomous vehicles keep making headlines, while the hundreds of drunk or fatigued human drivers who kill many rarely make the news. On the other hand, if all vehicles were autonomous, the algorithms would actually get even better, because they would no longer need code to handle moronic drivers on the road.

The driverless car in this case couldn't make the right call between remaining stationary and moving to the kerb. A driver would have made the right call.
 
The driverless car in this case couldn't make the right call between remaining stationary and moving to the kerb. A driver would have made the right call.
What if that driver is underage, drunk, or distracted by his handheld device…?
 
Learning under professional guidance?
I don’t know about the driver, who got off lightly. I do know my friend, who became fatherless. The point remains: humans try to bend rules and are reckless. Driverless cars, unless manipulated at the algorithm or sensor level, are not malicious.
 
What if that driver is underage, drunk, or distracted by his handheld device…?

Sometimes the driver would have done the wrong thing. But with the algorithm, the car consistently does the wrong thing in that situation. I think roads and shared traffic spaces are too difficult for driverless cars currently.
 
Sometimes the driver would have done the wrong thing. But with the algorithm, the car consistently does the wrong thing in that situation. I think roads and shared traffic spaces are too difficult for driverless cars currently.
What would be the statistically acceptable tolerance?

If today’s driverless vehicles have a relative kill rate of, say, 0.0001 of the kill rate of human drivers, would you be satisfied with a rate of 0.00001?
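To make those hypothetical relative rates concrete, here is rough arithmetic using the commonly cited figure of roughly 40,000 US road deaths per year. The relative rates are the post’s hypotheticals, not measured values:

```python
# Making the post's hypothetical relative kill rates concrete.
# The ~40,000 annual US road deaths figure is a rough, commonly cited
# approximation for human drivers; the relative rates are hypothetical.

us_road_deaths_per_year = 40_000

def projected_deaths(relative_rate: float) -> float:
    """Deaths per year if all driving matched the given relative rate."""
    return us_road_deaths_per_year * relative_rate

for rate in (1e-4, 1e-5):
    print(f"relative rate {rate:g}: ~{projected_deaths(rate):g} deaths/year")
```

At those hypothetical rates the difference is between a handful of deaths per year and a fraction of one, which is what the tolerance question is really asking about.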
 
Sometimes the driver would have done the wrong thing. But with the algorithm, the car consistently does the wrong thing in that situation. I think roads and shared traffic spaces are too difficult for driverless cars currently.
That's a different issue. There is nothing today to show that the driverless car is inherently more dangerous than a human, when humans are responsible for hundreds of fatal road accidents every year. It is fear of the unknown and nothing else.
 
That's a different issue. There is nothing today to show that the driverless car is inherently more dangerous than a human, when humans are responsible for hundreds of fatal road accidents every year. It is fear of the unknown and nothing else.

Faced with a unique situation, this driverless car was programmed to make the wrong choice. But it's clear that driverless cars are better than driven cars overall.

It seems to me that the best use for these vehicles presently is on a highway rather than on a city street.

The other question is legal responsibility for damages and injury.
 
Faced with a unique situation, this driverless car was programmed to make the wrong choice. But it's clear that driverless cars are better than driven cars overall.

It seems to me that the best use for these vehicles presently is on a highway rather than on a city street.

The other question is legal responsibility for damages and injury.
The best use is surely on highways, at least as a first step. I think driverless trucks are already being considered by big companies such as Walmart. They would use these trucks for warehouse-to-distribution routes, with last-mile delivery done by humans. That is already a great start. As with everything, there will be iterations and things will get better, so much so that driverless vehicles will be ubiquitous on city roads too.
 
The other factor that is an issue is the mixed driverless/driven environment. It would be best if you could switch to driverless overnight, as Sweden did in 1967 when it switched from driving on the left to driving on the right.
 
public transit please....though America has so much money invested in roads for cars, I guess banging our heads against the wall for EVs and robocars is the only solution leadership is willing to invest in
 
public transit please....though America has so much money invested in roads for cars, I guess banging our heads against the wall for EVs and robocars is the only solution leadership is willing to invest in
Indonesia launched a bullet train service before the USA did.
 
Before more people get killed, we should stop AI cars.



AI cars can potentially cause death.
What about that?
 
Jeremy Clarkson said on Top Gear a long time ago that the AI in the car might decide the driver's life was less valuable than the three pedestrians it was about to run over, and that's why he would never buy one.
 
public transit please....though America has so much money invested in roads for cars, I guess banging our heads against the wall for EVs and robocars is the only solution leadership is willing to invest in
No f'n thanks. I don't want to get on public transport with the ferals.
 
No f'n thanks. I don't want to get on public transport with the ferals.
I watched the clip. It sounds like they are planning to initiate “bystander training” so that witnesses will feel more confident to step into the fight arena.

Also, that news guy needs to do something about his eyebrows. Disturbing.
 
Driverless trains like the ones in Singapore are probably safer than driverless cars, because they run on a fixed track with no traffic obstacles. I guess driverless trains are easier to program and monitor.
 
I am pretty sure that AI cars are better drivers than humans, who get tired, sleepy, drunk, high, distracted etc. On the other hand, our tolerance for human mistakes is much higher than our tolerance for programming/machine algorithm mistakes, and that is why we get horrified by news like the OP.

'Shark kills human' is news even though it happens fewer than ten times a year globally, while 'Humans kill sharks to the verge of extinction' is not news even though it happens 100 million times every year.
 
I am pretty sure that AI cars are better drivers than humans, who get tired, sleepy, drunk, high, distracted etc. On the other hand, our tolerance for human mistakes is much higher than our tolerance for programming/machine algorithm mistakes, and that is why we get horrified by news like the OP.

'Shark kills human' is news even though it happens fewer than ten times a year globally, while 'Humans kill sharks to the verge of extinction' is not news even though it happens 100 million times every year.
I agree with what you commented on. But I'm more irritated by the AI never going over the speed limit, and perhaps going well below it, so the computer can drive safely within the limits of its programming.
 
Driverless robotic vehicles have been delivering pizzas for 4 years now without killing any people. Well, perhaps the pizzas have killed a few people. AI pizza delivery operates in Texas (Houston), California (San Jose, SF Bay Area, LA) and possibly other cities.



 