Third, we began sifting through our transcribed interviews and field notes, as well as examining all the images captured by the interviewees. The coding process is elaborate: we use what is called the constant comparative method, moving the coded pieces of data around to organize them into categories. From there we link our categories to theories and write up our findings.
In the second phase of our research, we are using machine learning and the Google Vision API to examine the images more closely and uncover ways to automatically find photos that might indicate people need to be rescued. We collected hundreds of photos of what it looks like to be flooded, to be rescued, and to rescue others. We keep those images private and make sure to de-identify any that appear in our published research.
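To give a sense of how this kind of automatic flagging can work, here is a minimal sketch using the Python client for the Google Cloud Vision API. The keyword list, score threshold, and file name are illustrative placeholders, not our actual criteria or code.

```python
# Sketch: label a photo with the Google Cloud Vision API and flag it if
# rescue-related labels appear. RESCUE_TERMS and the 0.7 threshold are
# hypothetical examples, not the study's real detection criteria.
from google.cloud import vision

# Illustrative keywords; a real pipeline would derive these from coded data.
RESCUE_TERMS = {"flood", "boat", "rescue", "water", "life jacket"}

def flag_possible_rescue(image_path: str, min_score: float = 0.7) -> bool:
    """Return True if Vision's labels suggest the photo may show a flood or rescue scene."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each label has a description (e.g. "flood") and a confidence score.
    return any(
        label.description.lower() in RESCUE_TERMS and label.score >= min_score
        for label in response.label_annotations
    )

if __name__ == "__main__":
    # Placeholder file name for illustration only.
    print(flag_possible_rescue("example_flood_photo.jpg"))
```

A keyword match on generic labels like these is only a starting point; part of the research is figuring out which visual cues actually distinguish a photo of someone who needs rescue from an ordinary flood photo.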