Every autonomous landing in space (on the Moon, Mars, etc.) must undergo the infamous “ten minutes of terror”: the entry, descent and landing (EDL) phase, in which the spacecraft goes from very high speed (e.g., 13,000 mph) to zero in perfect sequence, with perfect precision and timing… and the onboard computer has to do it all by itself, with no help from the ground.
To increase the precision of the EDL phase, significant effort is currently being invested in vision-based navigation (VBN). The high accuracy these algorithms achieve is their main benefit. The biggest challenge, however, is the low frame rate at which images are processed: around 0.5 frames per second (FPS), even on the most advanced on-board computers (OBCs) that include an FPGA (a programmable hardware device that can perform heavy computational tasks with low power consumption). Such a low rate reduces confidence in the overall navigation solution and therefore puts the whole mission at risk of failure.
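To see why 0.5 FPS is so limiting, a rough back-of-the-envelope sketch helps (illustrative numbers only, using the entry speed quoted above; `distance_per_frame_m` is a hypothetical helper, not any flight software):

```python
# Back-of-the-envelope illustration (not flight code): how far a
# spacecraft travels between two processed frames at a given frame rate.
MPH_TO_M_PER_S = 0.44704  # exact miles-per-hour -> metres-per-second factor

def distance_per_frame_m(speed_mph: float, fps: float) -> float:
    """Distance travelled (metres) between consecutive processed frames."""
    speed_m_s = speed_mph * MPH_TO_M_PER_S
    return speed_m_s / fps

# At 13,000 mph and 0.5 FPS, the vehicle moves roughly 11.6 km between
# vision updates -- which is why such a rate undermines confidence in
# the navigation solution.
print(f"{distance_per_frame_m(13_000, 0.5) / 1000:.1f} km per frame")
```

In other words, each image arrives kilometres after the previous one, so the navigation filter must coast on stale measurements for most of the descent.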
Furthermore, developing software for the EDL phase is extremely difficult, and achieving the necessary levels of determinism and efficiency is very costly in terms of both budget and schedule.
Image data processing in space
Several data processing solutions have been studied for image processing in space, and for VBN in particular:
- FPGAs, i.e., programmable hardware, can reduce power consumption and increase data processing throughput; however, their main drawback is the high complexity of programming them.
- GPUs are probably the fastest data processors available today. However, high power consumption, thermal load and performance bottlenecks in transferring data to GPU memory reduce their appeal for in-space applications.
- Software solutions running on the host computer are the most attractive option due to their programming simplicity and relatively good performance. However, their power consumption is high and their data processing is often not fast enough.