The green pixels in your final image do not come from evenly spaced locations on the original sensor: removing the unoccupied rows and columns between step 2 and step 3 breaks that correspondence. You are correct that the final image after my scaling is not a Bayer pattern (though it can be stored as one; just place the blue pixels in their proper spots). It is, however, a pattern in which the pixels of a given color are uniformly spaced, and that is the critical property needed for reconstructing a proper RGB image. The scheme also averages the closest set of four green pixels, which helps minimize aliasing. Of course, more involved spatial filtering before down-sampling could give even better anti-aliasing performance.
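As a rough sketch of the kind of averaging I mean, here is one way it could look in NumPy, assuming an RGGB mosaic. The function name and the reflect-padding edge handling are my own illustrative choices, not a definitive implementation: red and blue are taken straight from their sample sites, while each output green is the mean of the four greens surrounding a red site.

```python
import numpy as np

def downsample_bayer_rggb(raw):
    """Downsample an RGGB Bayer mosaic by 2x into an RGB image.

    Red and blue come directly from their sample sites; green is
    the average of the four greens surrounding each red site,
    which provides a small amount of anti-aliasing.
    """
    raw = np.asarray(raw, dtype=float)
    r = raw[0::2, 0::2]          # red sites: even rows, even cols
    b = raw[1::2, 1::2]          # blue sites: odd rows, odd cols

    # Pad by one pixel so every red site has four green neighbours.
    # Reflect padding preserves the color parity of the mosaic.
    p = np.pad(raw, 1, mode="reflect")
    up    = p[0:-2:2, 1:-1:2]    # green above each red site
    down  = p[2::2,  1:-1:2]     # green below
    left  = p[1:-1:2, 0:-2:2]    # green to the left
    right = p[1:-1:2, 2::2]      # green to the right
    g = (up + down + left + right) / 4.0

    return np.stack([r, g, b], axis=-1)
```

Because the averaging is centered on the red sites, all three color planes end up on the same uniform half-resolution grid, which is exactly the uniform-spacing property described above.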