[Image: Sled test simulation using the BioRID-II FE model for LS-DYNA software, compared with the IIHS sled and BioRID dummy]
By Marcy Edwards
Senior Research Engineer, IIHS
The scenario isn’t hard to imagine. A driver is stopped in traffic when suddenly another vehicle slams into them from behind. The person in the struck vehicle — especially if she’s a woman — is now at risk of debilitating neck pain that could interfere with work and life for months to come and lead to enormous medical bills.
Though rarely life-threatening, neck sprains and strains, which typically occur in rear-end crashes, are the most frequently reported injuries in U.S. auto insurance claims. For this reason, they were one of the first types of injuries that IIHS turned its attention to when we started testing vehicles for crash protection.
By any measure, our head restraint ratings were a success. They led to improvements in seat and head restraint designs that have shown up in real-world crash statistics. But with nearly all vehicles now earning good ratings, our tests no longer differentiate among vehicles to encourage further improvement. And though the neck injury problem has lessened, it hasn’t gone away.
My colleagues and I are currently developing a path for the future of IIHS rear-impact testing. Our long-term goal is to be able to evaluate how well each combination of a seat and head restraint protects people of all different sizes and shapes in a variety of seating positions and crash scenarios. Getting there is a multistep process and will require use of virtual testing with computer models as well as the conventional tools used in our original head restraint test. We’ll also need the help of scientists developing detailed computer models of the human body and the cooperation of vehicle manufacturers.
How we got here
When we began rating head restraints in 1995, we started with the basics. Neck injuries in rear impacts occur when the head lags behind the accelerating seat and torso. This lag can often be prevented by good head restraint geometry, so our first evaluations were simple measurements using a dummy representing a 50th-percentile man. A restraint should be at least as high as the head’s center of gravity, or about 3½ inches below the top of the head. The backset, or the distance from the back of the head to the restraint, should be as small as possible.
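To make those geometric criteria concrete, here is a minimal sketch of how the two measurements could be combined into a simple pass/fail check. The 3.5-inch offset comes from the paragraph above; the backset limit in the code is a placeholder chosen for illustration, not our published rating boundary.

```python
# Illustrative geometric head restraint check.
# The 3.5 in. offset is from the article (the head's center of gravity sits
# roughly 3.5 in. below the top of the head). The backset limit below is a
# hypothetical placeholder, not an IIHS rating threshold.

def geometric_check(top_of_head_height_in, restraint_height_in, backset_in,
                    max_backset_in=4.0):  # placeholder limit
    """True if the restraint reaches at least the head's center of gravity
    and the backset is within the placeholder limit."""
    head_cg_height = top_of_head_height_in - 3.5
    tall_enough = restraint_height_in >= head_cg_height
    close_enough = backset_in <= max_backset_in
    return tall_enough and close_enough

# Example: a restraint 1 in. below the top of the head with a 2 in. backset
print(geometric_check(top_of_head_height_in=35.0,
                      restraint_height_in=34.0,
                      backset_in=2.0))  # True under these placeholder numbers
```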
In 1995, only 3 percent of the head restraints we evaluated received good geometric ratings, while 82 percent were rated poor. Our ratings led manufacturers to pay attention to these measurements long before a 2010 government standard made good geometry a legal requirement.
Good geometry is necessary, but it’s not sufficient. Seats can differ in other ways too, such as structure placement, seatback stiffness and energy-absorbing properties, all of which can affect outcomes for occupants.
In 2004 we added a dynamic test for any vehicle with a good or acceptable geometric rating to evaluate how well the seat and head restraint managed crash energy and occupant motion. This test consisted of a simulated rear impact with the vehicle seat mounted to a sled. A special dummy known as BioRID, which has a realistic spine, was buckled in the seat. The pulse used in the test was equivalent to a rear-end crash with a velocity change of 10 mph, or a stationary vehicle being struck at 20 mph by a vehicle of the same weight.
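Those two descriptions of the pulse are equivalent under a simple momentum balance, assuming the two vehicles weigh the same and move together after impact (a perfectly plastic collision with no rebound). A short calculation, for illustration only:

```python
# Momentum balance for a rear-end crash: the struck vehicle starts at rest,
# and both vehicles are assumed to move at a common speed after impact.
def struck_vehicle_delta_v(striking_speed_mph, mass_ratio=1.0):
    """Velocity change of the struck vehicle.
    mass_ratio = striking vehicle weight / struck vehicle weight."""
    return striking_speed_mph * mass_ratio / (1.0 + mass_ratio)

print(struck_vehicle_delta_v(20.0))  # 10.0 mph, matching the test pulse
```

Any rebound between the vehicles would push the velocity change somewhat higher, so this is the simplest equivalence, not an exact reconstruction of every crash.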
The combination of our geometric ratings and our dynamic tests allowed us to identify the most effective head restraints. In a study of real-world crashes, injury rates were 15 percent lower for vehicles with good ratings compared with those rated poor, while long-term injuries, or those lasting three months or more, were 35 percent lower.
Beyond BioRID
Since we began dynamic testing, manufacturers have gotten very good at designing seats for the 10 mph velocity change, and today’s vehicles all perform well in that test. However, there are still differences in real-world performance. Insurance claims data collected by my colleagues at the Highway Loss Data Institute suggest that injury rates in rear-ended vehicles with good head restraint ratings vary widely.
So how can we design a new evaluation to better differentiate among restraints?
One relatively easy update we intend to make is to add a second dynamic test with a larger velocity change, since many real-world front-to-rear crashes occur at higher speeds. By adding a 15 mph test on top of the 10 mph one, we will be able to glean more information and encourage further progress. We plan to launch a new rating program based on the two tests within the next year or two.
Beyond test speed, other variables are harder to tweak. As is the case with all crash test dummies, BioRID has limited capabilities — for example, it’s only valid for fore-aft motion and lower-severity crashes — and doesn’t represent the diversity of the driving population. While it is an impressive tool with something that closely resembles a human spine, it represents the specific spine of a 50th-percentile male.
Real-world injury data tell us that women are more likely than men to suffer neck injuries in crashes, but we don’t really know why. Researchers in Sweden are currently developing a female dummy for use in rear-impact testing, which could someday help us evaluate protection for women specifically. However, no matter how sophisticated, physical dummies can’t capture soft tissue and nerve damage, which may play a role in whiplash injuries and in the differences between men’s and women’s susceptibility to them.
Perhaps more importantly, any physical dummy, male or female, represents just one particular body type. On the other hand, computer models of the human body, which are currently under development, could be more easily varied to represent a range of body types and injury risk factors. Virtual testing with these models could eventually show us how seats and head restraints perform for many different people sitting in different positions.
The path forward
How can we get from physical tests performed on actual seats to a much wider set of tests performed virtually? One big challenge is that while we can go out and purchase vehicles with physical seats to test, we cannot purchase computer models of those seats. Those models are the intellectual property of the manufacturers, so we’ll need their cooperation. At the same time, we’ll need to structure this cooperation so that it doesn’t compromise the independence of the testing program or the trust consumers place in the results.
We have a long and successful history of accepting test data from physical crash tests conducted by automakers for many of our ratings. In many cases, we allow established manufacturers to conduct tests according to our protocols and then supply us with video and other data so that we can verify the results and assign ratings. We randomly audit those manufacturer-conducted tests by repeating some of them in our own facility to make sure the outcomes match up.
We intend to build on that model as we branch out into virtual testing.
We’re planning to incorporate virtual tests into our head restraint ratings in stages. As a first step, we’ll give manufacturers the option of submitting virtual test data for our new 15 mph test and our established 10 mph test. This phase will help us get accustomed to working with virtual test data.
Later, we’ll expand the number of required test scenarios, potentially varying things like speed, seat position and occupant position — for example, a passenger leaning forward due to hard braking or a driver looking down at a phone in their lap. Our plan is to eventually expand the required virtual tests to include scenarios that can’t be tested in the real world because of the limitations of the dummies and other tools we have. This is where we’ll be able to evaluate performance with occupants of different body types and also in different positions that can’t be achieved by a physical dummy.
To make sure the virtual results for all these different test scenarios match reality, we’ll conduct physical tests for some of them. Replicating some results and then ensuring the same seat and dummy or human body models are used throughout the virtual evaluation will help us validate all the results, including the ones for scenarios we can’t physically test.
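As a purely hypothetical illustration of how such an expanded program might be organized, the sketch below pairs a handful of made-up scenarios with a flag marking which ones can also be run on the physical sled. The parameter names and values are my own placeholders, not protocol requirements.

```python
# Hypothetical sketch of a virtual test matrix cross-checked against
# physical sled tests. Scenarios, names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Scenario:
    delta_v_mph: float          # severity of the rear-impact pulse
    occupant: str               # body model used in the simulation
    posture: str                # seating position of the occupant
    physically_testable: bool   # can this case also be run on the sled?

matrix = [
    Scenario(10, "50th-percentile male", "nominal", True),
    Scenario(15, "50th-percentile male", "nominal", True),
    Scenario(15, "small female", "nominal", False),           # no physical dummy
    Scenario(15, "50th-percentile male", "leaning forward", False),
]

anchors = [s for s in matrix if s.physically_testable]
virtual_only = [s for s in matrix if not s.physically_testable]
print(f"{len(anchors)} anchor tests support {len(virtual_only)} virtual-only cases")
```

The idea is that the cases with physical counterparts act as anchors: if their virtual results match the sled tests, and the same seat and body models are used throughout, the virtual-only cases inherit that confidence.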
I’m looking forward to working with the industry to improve rear-impact protection for a diverse population of drivers and passengers with the help of virtual testing. The road map we’ve laid out will allow us to move into this new territory carefully and deliberately. And as we learn from this first experience, we’ll be able to apply this knowledge to other crash types, potentially spurring a wide array of vehicle safety improvements.