Data integrity is something any company that wishes to remain in business must take seriously, as the cost of neglecting data is far more than an abstract loss of information or files.
Beyond breaking data protection principles, which can be illegal depending on the type of data lost or compromised, and beyond the general disruption of restoring from backups and redoing existing work, data quality failures can bleed into parts of people’s lives in ways that are rarely appreciated.
Whether the fault lies in data input, data hosting or data output, here are some of the most devastating disasters caused by poor data quality, and the lessons we can learn from them.
The Most Expensive Corrective Lens
The Hubble Space Telescope is one of the most important satellites ever launched into space, its extremely powerful mirror changing human understanding of the nature of the universe, particularly when it comes to its expansion.
It was only after the telescope was finally launched in 1990, following years of delays, that an issue with the calculations and data revealed itself. The primary mirror that allowed the telescope to see across vast distances had been ground to a subtly wrong shape, out by roughly two microns at its edge, a flaw imperceptible to the human eye.
This meant that every picture it took and relayed back to Earth was blurry and close to unusable. A slight data quality error had turned one of the greatest human inventions in history into a $2.1bn chunk of space debris.
To fix this, the STS-61 servicing mission was launched in 1993 carrying COSTAR, a package of corrective optics commonly nicknamed a “contact lens”, which at a cost of around $50m ultimately made the telescope function the way it was always meant to.
Poor Data Input Killed Hundreds Of People
For a century, Boeing was seen as the king of the skies, the creator of some of the best and most well-known passenger airplanes ever built.
It took two faulty sensors to destroy that reputation forever, see criminal charges brought against the company, and taint the entire sector by association.
At the center of this issue is the Boeing 737 MAX, an upgraded version of the popular airliner that first flew in 1967.
The MAX entered service half a century later, in 2017, with more efficient engines, changes to the aerodynamics and modifications to the frame, as well as a new flight stabilization system known as the Maneuvering Characteristics Augmentation System (MCAS).
Designed to counter a nose-up tendency caused by the MAX’s larger, repositioned engines without requiring additional pilot training, MCAS read a single angle-of-attack sensor and would automatically push the plane’s nose down if that angle was deemed too high.
The problem was that there was no redundancy and no contingency: if that single sensor malfunctioned or produced inaccurate data, the system could drive the plane into a nosedive.
This happened twice, killing everyone on board in both cases.
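To make that failure mode concrete, here is a minimal sketch in Python contrasting a single-sensor trigger of the kind described above with a simple two-sensor cross-check, which is broadly how the post-grounding fix is reported to work. The function names, thresholds and structure are illustrative assumptions, not Boeing’s actual flight software.

```python
# Illustrative sketch only: NOT Boeing's flight software.
# It contrasts trusting one angle-of-attack (AOA) sensor with a
# two-sensor cross-check. All names and thresholds are assumptions.

AOA_LIMIT_DEG = 15.0       # hypothetical "angle too high" trigger
DISAGREE_LIMIT_DEG = 5.5   # reported disagreement limit in the revised design


def single_sensor_nose_down(aoa_deg: float) -> bool:
    """Original-style logic: act on one sensor, unconditionally.

    A single bad reading (Lion Air 610's replacement sensor is believed
    to have read roughly 21 degrees high) is enough to command
    nose-down trim even in normal flight.
    """
    return aoa_deg > AOA_LIMIT_DEG


def cross_checked_nose_down(aoa_a_deg: float, aoa_b_deg: float) -> bool:
    """Redundant logic: only act when two independent sensors agree.

    If the readings disagree beyond tolerance, the system deactivates
    rather than act on data whose integrity it cannot trust.
    """
    if abs(aoa_a_deg - aoa_b_deg) > DISAGREE_LIMIT_DEG:
        return False  # sensors disagree: fail safe and do nothing
    return max(aoa_a_deg, aoa_b_deg) > AOA_LIMIT_DEG


# One sensor fails 21 degrees high while the true angle is a normal 5 degrees.
faulty, healthy = 5.0 + 21.0, 5.0
print(single_sensor_nose_down(faulty))           # True: nose-down commanded
print(cross_checked_nose_down(faulty, healthy))  # False: fault detected
```

The difference is not sophistication but data integrity: the cross-checked version simply refuses to act on a reading it cannot corroborate.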
The first, Lion Air Flight 610 on 29th October 2018, was caused by a replacement angle-of-attack sensor believed to have been miscalibrated. The integrity of its data was never checked, and the result was a catastrophic crash into the Java Sea off the coast of Indonesia.
The second was Ethiopian Airlines Flight 302 on 10th March 2019. Two minutes after taking off from Addis Ababa, MCAS activated, pitching the plane into a nosedive. Despite the best efforts of the pilots to regain control, the plane crashed just six minutes after takeoff, killing everyone on board.
These two crashes, both rooted in data integrity failures, proved intensely revealing of the company’s priorities, and have prompted further investigations in light of the otherwise unrelated Alaska Airlines Flight 1282 incident.